I0520 23:37:40.248208 8 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0520 23:37:40.248436 8 e2e.go:129] Starting e2e run "9712ad4d-9dac-49d9-a306-bb8bcb5fd1e8" on Ginkgo node 1
{"msg":"Test Suite starting","total":288,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1590017859 - Will randomize all specs
Will run 288 of 5095 specs
May 20 23:37:40.315: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:37:40.317: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 20 23:37:40.340: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 20 23:37:40.377: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 20 23:37:40.377: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 20 23:37:40.377: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 20 23:37:40.389: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 20 23:37:40.389: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 20 23:37:40.389: INFO: e2e test version: v1.19.0-alpha.3.35+3416442e4b7eeb
May 20 23:37:40.390: INFO: kube-apiserver version: v1.18.2
May 20 23:37:40.390: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:37:40.396: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition
listing custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 23:37:40.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
May 20 23:37:40.458: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 20 23:37:40.460: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 23:37:46.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1172" for this suite.
• [SLOW TEST:6.422 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Simple CustomResourceDefinition
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
listing custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":288,"completed":1,"skipped":13,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
should verify ResourceQuota with terminating scopes.
[Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 23:37:46.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 23:38:03.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2999" for this suite.
• [SLOW TEST:16.276 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with terminating scopes.
[Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":288,"completed":2,"skipped":67,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
should mutate custom resource with different stored version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 23:38:03.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 20 23:38:03.895: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 20 23:38:05.906: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725614683, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725614683, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725614684, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725614683, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 20 23:38:07.911: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725614683, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725614683, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725614684, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725614683, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 20 23:38:10.935: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 20 23:38:10.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2939-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 23:38:12.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-434" for this suite.
STEP: Destroying namespace "webhook-434-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:9.121 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource with different stored version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":288,"completed":3,"skipped":70,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 23:38:12.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod busybox-383b17a6-8c81-49f4-a91c-3028ebc69688 in namespace container-probe-9995
May 20 23:38:16.286: INFO: Started pod busybox-383b17a6-8c81-49f4-a91c-3028ebc69688 in namespace container-probe-9995
STEP: checking the pod's current state and verifying that restartCount is present
May 20 23:38:16.289: INFO: Initial restart count of pod busybox-383b17a6-8c81-49f4-a91c-3028ebc69688 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 23:42:16.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9995" for this suite.
• [SLOW TEST:244.782 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":4,"skipped":84,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 23:42:16.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 20 23:42:17.102: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
May 20 23:42:17.111: INFO: Number of nodes with available pods: 0
May 20 23:42:17.111: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
May 20 23:42:17.202: INFO: Number of nodes with available pods: 0
May 20 23:42:17.202: INFO: Node latest-worker2 is running more than one daemon pod
May 20 23:42:18.206: INFO: Number of nodes with available pods: 0
May 20 23:42:18.206: INFO: Node latest-worker2 is running more than one daemon pod
May 20 23:42:19.272: INFO: Number of nodes with available pods: 0
May 20 23:42:19.272: INFO: Node latest-worker2 is running more than one daemon pod
May 20 23:42:20.206: INFO: Number of nodes with available pods: 0
May 20 23:42:20.206: INFO: Node latest-worker2 is running more than one daemon pod
May 20 23:42:21.206: INFO: Number of nodes with available pods: 0
May 20 23:42:21.206: INFO: Node latest-worker2 is running more than one daemon pod
May 20 23:42:22.220: INFO: Number of nodes with available pods: 1
May 20 23:42:22.220: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
May 20 23:42:22.259: INFO: Number of nodes with available pods: 1
May 20 23:42:22.259: INFO: Number of running nodes: 0, number of available pods: 1
May 20 23:42:23.264: INFO: Number of nodes with available pods: 0
May 20 23:42:23.264: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
May 20 23:42:23.281: INFO: Number of nodes with available pods: 0
May 20 23:42:23.281: INFO: Node latest-worker2 is running more than one daemon pod
May 20 23:42:24.286: INFO: Number of nodes with available pods: 0
May 20 23:42:24.286: INFO: Node latest-worker2 is running more than one daemon pod
May 20 23:42:25.286: INFO: Number of nodes with available pods: 0
May 20 23:42:25.286: INFO: Node latest-worker2 is running more than one daemon pod
May 20 23:42:26.286: INFO: Number of nodes with available pods: 0
May 20 23:42:26.286: INFO: Node latest-worker2 is running more than one daemon pod
May 20 23:42:27.286: INFO: Number of nodes with available pods: 0
May 20 23:42:27.286: INFO: Node latest-worker2 is running more than one daemon pod
May 20 23:42:28.286: INFO: Number of nodes with available pods: 0
May 20 23:42:28.286: INFO: Node latest-worker2 is running more than one daemon pod
May 20 23:42:29.311: INFO: Number of nodes with available pods: 0
May 20 23:42:29.311: INFO: Node latest-worker2 is running more than one daemon pod
May 20 23:42:30.285: INFO: Number of nodes with available pods: 0
May 20 23:42:30.285: INFO: Node latest-worker2 is running more than one daemon pod
May 20 23:42:31.299: INFO: Number of nodes with available pods: 0
May 20 23:42:31.299: INFO: Node latest-worker2 is running more than one daemon pod
May 20 23:42:32.285: INFO: Number of nodes with available pods: 0
May 20 23:42:32.286: INFO: Node latest-worker2 is running more than one daemon pod
May 20 23:42:33.310: INFO: Number of nodes with available pods: 0
May 20 23:42:33.310: INFO: Node latest-worker2 is running more than one daemon pod
May 20 23:42:34.286: INFO: Number of nodes with available pods: 0
May 20 23:42:34.286: INFO: Node latest-worker2 is running more than one daemon pod
May 20 23:42:35.320: INFO: Number of nodes with available pods: 0
May 20 23:42:35.320: INFO: Node latest-worker2 is running more than one daemon pod
May 20 23:42:36.286: INFO: Number of nodes with available pods: 0
May 20 23:42:36.286: INFO: Node latest-worker2 is running more than one daemon pod
May 20 23:42:37.286: INFO: Number of nodes with available pods: 0
May 20 23:42:37.286: INFO: Node latest-worker2 is running more than one daemon pod
May 20 23:42:38.286: INFO: Number of nodes with available pods: 0
May 20 23:42:38.286: INFO: Node latest-worker2 is running more than one daemon pod
May 20 23:42:39.286: INFO: Number of nodes with available pods: 1
May 20 23:42:39.286: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1242, will wait for the garbage collector to delete the pods
May 20 23:42:39.352: INFO: Deleting DaemonSet.extensions daemon-set took: 6.435431ms
May 20 23:42:39.652: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.241676ms
May 20 23:42:45.274: INFO: Number of nodes with available pods: 0
May 20 23:42:45.274: INFO: Number of running nodes: 0, number of available pods: 0
May 20 23:42:45.279: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1242/daemonsets","resourceVersion":"6340533"},"items":null}
May 20 23:42:45.281: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1242/pods","resourceVersion":"6340533"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 23:42:45.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1242" for this suite.
• [SLOW TEST:28.325 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":288,"completed":5,"skipped":93,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
should create a ResourceQuota and capture the life of a pod.
[Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 23:42:45.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 23:42:58.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1833" for this suite.
• [SLOW TEST:13.198 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a pod.
[Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":288,"completed":6,"skipped":130,"failed":0}
SSS
------------------------------
[sig-api-machinery] Secrets
should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 23:42:58.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name secret-emptykey-test-971c20ae-4dd8-4506-a64d-0f61cbadbdde
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 23:42:58.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5664" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":288,"completed":7,"skipped":133,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Services
should have session affinity work for NodePort service [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 23:42:58.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should have session affinity work for NodePort service [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-6531
STEP: creating service affinity-nodeport in namespace services-6531
STEP: creating replication controller affinity-nodeport in namespace services-6531
I0520 23:42:58.731984 8 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-6531, replica count: 3
I0520 23:43:01.782425 8 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0520 23:43:04.782718 8 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 20 23:43:04.792: INFO: Creating new exec pod
May 20 23:43:09.835: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6531
execpod-affinityvrt9t -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' May 20 23:43:12.712: INFO: stderr: "I0520 23:43:12.567296 32 log.go:172] (0xc00003ac60) (0xc0001aebe0) Create stream\nI0520 23:43:12.567355 32 log.go:172] (0xc00003ac60) (0xc0001aebe0) Stream added, broadcasting: 1\nI0520 23:43:12.570046 32 log.go:172] (0xc00003ac60) Reply frame received for 1\nI0520 23:43:12.570116 32 log.go:172] (0xc00003ac60) (0xc00002c3c0) Create stream\nI0520 23:43:12.570139 32 log.go:172] (0xc00003ac60) (0xc00002c3c0) Stream added, broadcasting: 3\nI0520 23:43:12.571130 32 log.go:172] (0xc00003ac60) Reply frame received for 3\nI0520 23:43:12.571158 32 log.go:172] (0xc00003ac60) (0xc00001a500) Create stream\nI0520 23:43:12.571169 32 log.go:172] (0xc00003ac60) (0xc00001a500) Stream added, broadcasting: 5\nI0520 23:43:12.572274 32 log.go:172] (0xc00003ac60) Reply frame received for 5\nI0520 23:43:12.692822 32 log.go:172] (0xc00003ac60) Data frame received for 5\nI0520 23:43:12.692849 32 log.go:172] (0xc00001a500) (5) Data frame handling\nI0520 23:43:12.692870 32 log.go:172] (0xc00001a500) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nI0520 23:43:12.705493 32 log.go:172] (0xc00003ac60) Data frame received for 5\nI0520 23:43:12.705527 32 log.go:172] (0xc00001a500) (5) Data frame handling\nI0520 23:43:12.705558 32 log.go:172] (0xc00001a500) (5) Data frame sent\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0520 23:43:12.705653 32 log.go:172] (0xc00003ac60) Data frame received for 5\nI0520 23:43:12.705672 32 log.go:172] (0xc00001a500) (5) Data frame handling\nI0520 23:43:12.706061 32 log.go:172] (0xc00003ac60) Data frame received for 3\nI0520 23:43:12.706079 32 log.go:172] (0xc00002c3c0) (3) Data frame handling\nI0520 23:43:12.707739 32 log.go:172] (0xc00003ac60) Data frame received for 1\nI0520 23:43:12.707768 32 log.go:172] (0xc0001aebe0) (1) Data frame handling\nI0520 23:43:12.707790 32 log.go:172] (0xc0001aebe0) (1) Data frame sent\nI0520 
23:43:12.707808 32 log.go:172] (0xc00003ac60) (0xc0001aebe0) Stream removed, broadcasting: 1\nI0520 23:43:12.707833 32 log.go:172] (0xc00003ac60) Go away received\nI0520 23:43:12.708148 32 log.go:172] (0xc00003ac60) (0xc0001aebe0) Stream removed, broadcasting: 1\nI0520 23:43:12.708167 32 log.go:172] (0xc00003ac60) (0xc00002c3c0) Stream removed, broadcasting: 3\nI0520 23:43:12.708179 32 log.go:172] (0xc00003ac60) (0xc00001a500) Stream removed, broadcasting: 5\n" May 20 23:43:12.713: INFO: stdout: "" May 20 23:43:12.714: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6531 execpod-affinityvrt9t -- /bin/sh -x -c nc -zv -t -w 2 10.103.122.191 80' May 20 23:43:12.922: INFO: stderr: "I0520 23:43:12.850002 64 log.go:172] (0xc0009bc8f0) (0xc0000dda40) Create stream\nI0520 23:43:12.850060 64 log.go:172] (0xc0009bc8f0) (0xc0000dda40) Stream added, broadcasting: 1\nI0520 23:43:12.852523 64 log.go:172] (0xc0009bc8f0) Reply frame received for 1\nI0520 23:43:12.852569 64 log.go:172] (0xc0009bc8f0) (0xc0004f5400) Create stream\nI0520 23:43:12.852583 64 log.go:172] (0xc0009bc8f0) (0xc0004f5400) Stream added, broadcasting: 3\nI0520 23:43:12.853809 64 log.go:172] (0xc0009bc8f0) Reply frame received for 3\nI0520 23:43:12.853856 64 log.go:172] (0xc0009bc8f0) (0xc000430280) Create stream\nI0520 23:43:12.853877 64 log.go:172] (0xc0009bc8f0) (0xc000430280) Stream added, broadcasting: 5\nI0520 23:43:12.854719 64 log.go:172] (0xc0009bc8f0) Reply frame received for 5\nI0520 23:43:12.914087 64 log.go:172] (0xc0009bc8f0) Data frame received for 5\nI0520 23:43:12.914141 64 log.go:172] (0xc000430280) (5) Data frame handling\nI0520 23:43:12.914157 64 log.go:172] (0xc000430280) (5) Data frame sent\n+ nc -zv -t -w 2 10.103.122.191 80\nConnection to 10.103.122.191 80 port [tcp/http] succeeded!\nI0520 23:43:12.914188 64 log.go:172] (0xc0009bc8f0) Data frame received for 3\nI0520 23:43:12.914225 64 log.go:172] 
(0xc0004f5400) (3) Data frame handling\nI0520 23:43:12.914253 64 log.go:172] (0xc0009bc8f0) Data frame received for 5\nI0520 23:43:12.914272 64 log.go:172] (0xc000430280) (5) Data frame handling\nI0520 23:43:12.915473 64 log.go:172] (0xc0009bc8f0) Data frame received for 1\nI0520 23:43:12.915499 64 log.go:172] (0xc0000dda40) (1) Data frame handling\nI0520 23:43:12.915507 64 log.go:172] (0xc0000dda40) (1) Data frame sent\nI0520 23:43:12.915516 64 log.go:172] (0xc0009bc8f0) (0xc0000dda40) Stream removed, broadcasting: 1\nI0520 23:43:12.915524 64 log.go:172] (0xc0009bc8f0) Go away received\nI0520 23:43:12.916103 64 log.go:172] (0xc0009bc8f0) (0xc0000dda40) Stream removed, broadcasting: 1\nI0520 23:43:12.916130 64 log.go:172] (0xc0009bc8f0) (0xc0004f5400) Stream removed, broadcasting: 3\nI0520 23:43:12.916141 64 log.go:172] (0xc0009bc8f0) (0xc000430280) Stream removed, broadcasting: 5\n" May 20 23:43:12.922: INFO: stdout: "" May 20 23:43:12.922: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6531 execpod-affinityvrt9t -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30746' May 20 23:43:13.142: INFO: stderr: "I0520 23:43:13.065400 83 log.go:172] (0xc000a91340) (0xc000b16320) Create stream\nI0520 23:43:13.065472 83 log.go:172] (0xc000a91340) (0xc000b16320) Stream added, broadcasting: 1\nI0520 23:43:13.069810 83 log.go:172] (0xc000a91340) Reply frame received for 1\nI0520 23:43:13.069864 83 log.go:172] (0xc000a91340) (0xc0006de6e0) Create stream\nI0520 23:43:13.069885 83 log.go:172] (0xc000a91340) (0xc0006de6e0) Stream added, broadcasting: 3\nI0520 23:43:13.070599 83 log.go:172] (0xc000a91340) Reply frame received for 3\nI0520 23:43:13.070633 83 log.go:172] (0xc000a91340) (0xc000678640) Create stream\nI0520 23:43:13.070642 83 log.go:172] (0xc000a91340) (0xc000678640) Stream added, broadcasting: 5\nI0520 23:43:13.071372 83 log.go:172] (0xc000a91340) Reply frame received for 5\nI0520 
23:43:13.134166 83 log.go:172] (0xc000a91340) Data frame received for 5\nI0520 23:43:13.134208 83 log.go:172] (0xc000678640) (5) Data frame handling\nI0520 23:43:13.134248 83 log.go:172] (0xc000678640) (5) Data frame sent\nI0520 23:43:13.134268 83 log.go:172] (0xc000a91340) Data frame received for 5\nI0520 23:43:13.134280 83 log.go:172] (0xc000678640) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30746\nConnection to 172.17.0.13 30746 port [tcp/30746] succeeded!\nI0520 23:43:13.134410 83 log.go:172] (0xc000678640) (5) Data frame sent\nI0520 23:43:13.134569 83 log.go:172] (0xc000a91340) Data frame received for 5\nI0520 23:43:13.134609 83 log.go:172] (0xc000678640) (5) Data frame handling\nI0520 23:43:13.134825 83 log.go:172] (0xc000a91340) Data frame received for 3\nI0520 23:43:13.134854 83 log.go:172] (0xc0006de6e0) (3) Data frame handling\nI0520 23:43:13.136801 83 log.go:172] (0xc000a91340) Data frame received for 1\nI0520 23:43:13.136844 83 log.go:172] (0xc000b16320) (1) Data frame handling\nI0520 23:43:13.136881 83 log.go:172] (0xc000b16320) (1) Data frame sent\nI0520 23:43:13.136915 83 log.go:172] (0xc000a91340) (0xc000b16320) Stream removed, broadcasting: 1\nI0520 23:43:13.136990 83 log.go:172] (0xc000a91340) Go away received\nI0520 23:43:13.137726 83 log.go:172] (0xc000a91340) (0xc000b16320) Stream removed, broadcasting: 1\nI0520 23:43:13.137750 83 log.go:172] (0xc000a91340) (0xc0006de6e0) Stream removed, broadcasting: 3\nI0520 23:43:13.137769 83 log.go:172] (0xc000a91340) (0xc000678640) Stream removed, broadcasting: 5\n" May 20 23:43:13.142: INFO: stdout: "" May 20 23:43:13.142: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6531 execpod-affinityvrt9t -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30746' May 20 23:43:13.363: INFO: stderr: "I0520 23:43:13.289425 103 log.go:172] (0xc0009e0790) (0xc0006dcdc0) Create stream\nI0520 23:43:13.289481 103 log.go:172] 
(0xc0009e0790) (0xc0006dcdc0) Stream added, broadcasting: 1\nI0520 23:43:13.295278 103 log.go:172] (0xc0009e0790) Reply frame received for 1\nI0520 23:43:13.295313 103 log.go:172] (0xc0009e0790) (0xc0005fe140) Create stream\nI0520 23:43:13.295323 103 log.go:172] (0xc0009e0790) (0xc0005fe140) Stream added, broadcasting: 3\nI0520 23:43:13.296400 103 log.go:172] (0xc0009e0790) Reply frame received for 3\nI0520 23:43:13.296422 103 log.go:172] (0xc0009e0790) (0xc0005fe6e0) Create stream\nI0520 23:43:13.296430 103 log.go:172] (0xc0009e0790) (0xc0005fe6e0) Stream added, broadcasting: 5\nI0520 23:43:13.297651 103 log.go:172] (0xc0009e0790) Reply frame received for 5\nI0520 23:43:13.355340 103 log.go:172] (0xc0009e0790) Data frame received for 3\nI0520 23:43:13.355399 103 log.go:172] (0xc0005fe140) (3) Data frame handling\nI0520 23:43:13.355430 103 log.go:172] (0xc0009e0790) Data frame received for 5\nI0520 23:43:13.355444 103 log.go:172] (0xc0005fe6e0) (5) Data frame handling\nI0520 23:43:13.355458 103 log.go:172] (0xc0005fe6e0) (5) Data frame sent\nI0520 23:43:13.355470 103 log.go:172] (0xc0009e0790) Data frame received for 5\nI0520 23:43:13.355481 103 log.go:172] (0xc0005fe6e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30746\nConnection to 172.17.0.12 30746 port [tcp/30746] succeeded!\nI0520 23:43:13.357475 103 log.go:172] (0xc0009e0790) Data frame received for 1\nI0520 23:43:13.357516 103 log.go:172] (0xc0006dcdc0) (1) Data frame handling\nI0520 23:43:13.357541 103 log.go:172] (0xc0006dcdc0) (1) Data frame sent\nI0520 23:43:13.357585 103 log.go:172] (0xc0009e0790) (0xc0006dcdc0) Stream removed, broadcasting: 1\nI0520 23:43:13.357995 103 log.go:172] (0xc0009e0790) (0xc0006dcdc0) Stream removed, broadcasting: 1\nI0520 23:43:13.358018 103 log.go:172] (0xc0009e0790) (0xc0005fe140) Stream removed, broadcasting: 3\nI0520 23:43:13.358029 103 log.go:172] (0xc0009e0790) (0xc0005fe6e0) Stream removed, broadcasting: 5\n" May 20 23:43:13.363: INFO: stdout: "" May 20 
23:43:13.364: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6531 execpod-affinityvrt9t -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:30746/ ; done' May 20 23:43:13.739: INFO: stderr: "I0520 23:43:13.515750 123 log.go:172] (0xc000c02630) (0xc000938320) Create stream\nI0520 23:43:13.515834 123 log.go:172] (0xc000c02630) (0xc000938320) Stream added, broadcasting: 1\nI0520 23:43:13.519351 123 log.go:172] (0xc000c02630) Reply frame received for 1\nI0520 23:43:13.519411 123 log.go:172] (0xc000c02630) (0xc00091e460) Create stream\nI0520 23:43:13.519432 123 log.go:172] (0xc000c02630) (0xc00091e460) Stream added, broadcasting: 3\nI0520 23:43:13.520427 123 log.go:172] (0xc000c02630) Reply frame received for 3\nI0520 23:43:13.520483 123 log.go:172] (0xc000c02630) (0xc00091ec80) Create stream\nI0520 23:43:13.520507 123 log.go:172] (0xc000c02630) (0xc00091ec80) Stream added, broadcasting: 5\nI0520 23:43:13.521591 123 log.go:172] (0xc000c02630) Reply frame received for 5\nI0520 23:43:13.587003 123 log.go:172] (0xc000c02630) Data frame received for 5\nI0520 23:43:13.587027 123 log.go:172] (0xc00091ec80) (5) Data frame handling\nI0520 23:43:13.587035 123 log.go:172] (0xc00091ec80) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30746/\nI0520 23:43:13.587048 123 log.go:172] (0xc000c02630) Data frame received for 3\nI0520 23:43:13.587053 123 log.go:172] (0xc00091e460) (3) Data frame handling\nI0520 23:43:13.587059 123 log.go:172] (0xc00091e460) (3) Data frame sent\nI0520 23:43:13.651220 123 log.go:172] (0xc000c02630) Data frame received for 3\nI0520 23:43:13.651249 123 log.go:172] (0xc00091e460) (3) Data frame handling\nI0520 23:43:13.651269 123 log.go:172] (0xc00091e460) (3) Data frame sent\nI0520 23:43:13.651891 123 log.go:172] (0xc000c02630) Data frame received for 5\nI0520 23:43:13.651912 123 
log.go:172] (0xc00091ec80) (5) Data frame handling\nI0520 23:43:13.651924 123 log.go:172] (0xc00091ec80) (5) Data frame sent\nI0520 23:43:13.651935 123 log.go:172] (0xc000c02630) Data frame received for 5\nI0520 23:43:13.651952 123 log.go:172] (0xc00091ec80) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30746/I0520 23:43:13.651972 123 log.go:172] (0xc000c02630) Data frame received for 3\nI0520 23:43:13.651984 123 log.go:172] (0xc00091e460) (3) Data frame handling\nI0520 23:43:13.651998 123 log.go:172] (0xc00091e460) (3) Data frame sent\n\nI0520 23:43:13.652043 123 log.go:172] (0xc00091ec80) (5) Data frame sent\nI0520 23:43:13.658668 123 log.go:172] (0xc000c02630) Data frame received for 5\nI0520 23:43:13.658697 123 log.go:172] (0xc00091ec80) (5) Data frame handling\nI0520 23:43:13.658717 123 log.go:172] (0xc00091ec80) (5) Data frame sent\nI0520 23:43:13.658755 123 log.go:172] (0xc000c02630) Data frame received for 5\nI0520 23:43:13.658771 123 log.go:172] (0xc00091ec80) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30746/\nI0520 23:43:13.658810 123 log.go:172] (0xc00091ec80) (5) Data frame sent\nI0520 23:43:13.658848 123 log.go:172] (0xc000c02630) Data frame received for 3\nI0520 23:43:13.658866 123 log.go:172] (0xc00091e460) (3) Data frame handling\nI0520 23:43:13.658885 123 log.go:172] (0xc00091e460) (3) Data frame sent\nI0520 23:43:13.658901 123 log.go:172] (0xc000c02630) Data frame received for 3\nI0520 23:43:13.658930 123 log.go:172] (0xc00091e460) (3) Data frame handling\nI0520 23:43:13.658968 123 log.go:172] (0xc00091e460) (3) Data frame sent\nI0520 23:43:13.664299 123 log.go:172] (0xc000c02630) Data frame received for 3\nI0520 23:43:13.664321 123 log.go:172] (0xc00091e460) (3) Data frame handling\nI0520 23:43:13.664336 123 log.go:172] (0xc00091e460) (3) Data frame sent\nI0520 23:43:13.664932 123 log.go:172] (0xc000c02630) Data frame received for 3\nI0520 23:43:13.664950 123 
log.go:172] (0xc00091e460) (3) Data frame handling\nI0520 23:43:13.664965 123 log.go:172] (0xc00091e460) (3) Data frame sent\nI0520 23:43:13.664983 123 log.go:172] (0xc000c02630) Data frame received for 5\nI0520 23:43:13.664991 123 log.go:172] (0xc00091ec80) (5) Data frame handling\nI0520 23:43:13.665005 123 log.go:172] (0xc00091ec80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30746/\nI0520 23:43:13.668894 123 log.go:172] (0xc000c02630) Data frame received for 3\nI0520 23:43:13.668909 123 log.go:172] (0xc00091e460) (3) Data frame handling\nI0520 23:43:13.668917 123 log.go:172] (0xc00091e460) (3) Data frame sent\nI0520 23:43:13.669650 123 log.go:172] (0xc000c02630) Data frame received for 3\nI0520 23:43:13.669668 123 log.go:172] (0xc00091e460) (3) Data frame handling\nI0520 23:43:13.669681 123 log.go:172] (0xc00091e460) (3) Data frame sent\nI0520 23:43:13.669697 123 log.go:172] (0xc000c02630) Data frame received for 5\nI0520 23:43:13.669705 123 log.go:172] (0xc00091ec80) (5) Data frame handling\nI0520 23:43:13.669713 123 log.go:172] (0xc00091ec80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30746/\nI0520 23:43:13.672493 123 log.go:172] (0xc000c02630) Data frame received for 3\nI0520 23:43:13.672506 123 log.go:172] (0xc00091e460) (3) Data frame handling\nI0520 23:43:13.672515 123 log.go:172] (0xc00091e460) (3) Data frame sent\nI0520 23:43:13.673552 123 log.go:172] (0xc000c02630) Data frame received for 3\nI0520 23:43:13.673566 123 log.go:172] (0xc00091e460) (3) Data frame handling\nI0520 23:43:13.673580 123 log.go:172] (0xc00091e460) (3) Data frame sent\nI0520 23:43:13.673594 123 log.go:172] (0xc000c02630) Data frame received for 5\nI0520 23:43:13.673632 123 log.go:172] (0xc00091ec80) (5) Data frame handling\nI0520 23:43:13.673654 123 log.go:172] (0xc00091ec80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30746/\nI0520 23:43:13.679217 123 log.go:172] 
(0xc000c02630) Data frame received for 3\nI0520 23:43:13.679247 123 log.go:172] (0xc00091e460) (3) Data frame handling\nI0520 23:43:13.679272 123 log.go:172] (0xc00091e460) (3) Data frame sent\nI0520 23:43:13.679948 123 log.go:172] (0xc000c02630) Data frame received for 3\nI0520 23:43:13.679960 123 log.go:172] (0xc00091e460) (3) Data frame handling\nI0520 23:43:13.679967 123 log.go:172] (0xc00091e460) (3) Data frame sent\nI0520 23:43:13.680000 123 log.go:172] (0xc000c02630) Data frame received for 5\nI0520 23:43:13.680017 123 log.go:172] (0xc00091ec80) (5) Data frame handling\nI0520 23:43:13.680040 123 log.go:172] (0xc00091ec80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30746/\nI0520 23:43:13.686302 123 log.go:172] (0xc000c02630) Data frame received for 3\nI0520 23:43:13.686332 123 log.go:172] (0xc00091e460) (3) Data frame handling\nI0520 23:43:13.686368 123 log.go:172] (0xc00091e460) (3) Data frame sent\nI0520 23:43:13.686877 123 log.go:172] (0xc000c02630) Data frame received for 3\nI0520 23:43:13.686916 123 log.go:172] (0xc00091e460) (3) Data frame handling\nI0520 23:43:13.686933 123 log.go:172] (0xc00091e460) (3) Data frame sent\nI0520 23:43:13.686957 123 log.go:172] (0xc000c02630) Data frame received for 5\nI0520 23:43:13.686972 123 log.go:172] (0xc00091ec80) (5) Data frame handling\nI0520 23:43:13.687000 123 log.go:172] (0xc00091ec80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30746/\nI0520 23:43:13.690953 123 log.go:172] (0xc000c02630) Data frame received for 3\nI0520 23:43:13.690984 123 log.go:172] (0xc00091e460) (3) Data frame handling\nI0520 23:43:13.691007 123 log.go:172] (0xc00091e460) (3) Data frame sent\nI0520 23:43:13.692159 123 log.go:172] (0xc000c02630) Data frame received for 5\nI0520 23:43:13.692194 123 log.go:172] (0xc00091ec80) (5) Data frame handling\nI0520 23:43:13.692216 123 log.go:172] (0xc00091ec80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.17.0.13:30746/\nI0520 23:43:13.692288 123 log.go:172] (0xc000c02630) Data frame received for 3\nI0520 23:43:13.692310 123 log.go:172] (0xc00091e460) (3) Data frame handling\nI0520 23:43:13.692329 123 log.go:172] (0xc00091e460) (3) Data frame sent\nI0520 23:43:13.698255 123 log.go:172] (0xc000c02630) Data frame received for 3\nI0520 23:43:13.698278 123 log.go:172] (0xc00091e460) (3) Data frame handling\nI0520 23:43:13.698297 123 log.go:172] (0xc00091e460) (3) Data frame sent\nI0520 23:43:13.699018 123 log.go:172] (0xc000c02630) Data frame received for 3\nI0520 23:43:13.699042 123 log.go:172] (0xc000c02630) Data frame received for 5\nI0520 23:43:13.699075 123 log.go:172] (0xc00091ec80) (5) Data frame handling\nI0520 23:43:13.699089 123 log.go:172] (0xc00091ec80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30746/\nI0520 23:43:13.699102 123 log.go:172] (0xc00091e460) (3) Data frame handling\nI0520 23:43:13.699110 123 log.go:172] (0xc00091e460) (3) Data frame sent\nI0520 23:43:13.704299 123 log.go:172] (0xc000c02630) Data frame received for 3\nI0520 23:43:13.704319 123 log.go:172] (0xc00091e460) (3) Data frame handling\nI0520 23:43:13.704336 123 log.go:172] (0xc00091e460) (3) Data frame sent\nI0520 23:43:13.705317 123 log.go:172] (0xc000c02630) Data frame received for 5\nI0520 23:43:13.705362 123 log.go:172] (0xc00091ec80) (5) Data frame handling\nI0520 23:43:13.705381 123 log.go:172] (0xc00091ec80) (5) Data frame sent\nI0520 23:43:13.705395 123 log.go:172] (0xc000c02630) Data frame received for 3\nI0520 23:43:13.705410 123 log.go:172] (0xc00091e460) (3) Data frame handling\nI0520 23:43:13.705426 123 log.go:172] (0xc00091e460) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30746/\nI0520 23:43:13.709880 123 log.go:172] (0xc000c02630) Data frame received for 3\nI0520 23:43:13.709898 123 log.go:172] (0xc00091e460) (3) Data frame handling\nI0520 23:43:13.709907 123 log.go:172] (0xc00091e460) 
(3) Data frame sent\nI0520 23:43:13.710412 123 log.go:172] (0xc000c02630) Data frame received for 3\nI0520 23:43:13.710452 123 log.go:172] (0xc00091e460) (3) Data frame handling\nI0520 23:43:13.710470 123 log.go:172] (0xc00091e460) (3) Data frame sent\nI0520 23:43:13.710496 123 log.go:172] (0xc000c02630) Data frame received for 5\nI0520 23:43:13.710518 123 log.go:172] (0xc00091ec80) (5) Data frame handling\nI0520 23:43:13.710540 123 log.go:172] (0xc00091ec80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30746/\nI0520 23:43:13.714412 123 log.go:172] (0xc000c02630) Data frame received for 3\nI0520 23:43:13.714428 123 log.go:172] (0xc00091e460) (3) Data frame handling\nI0520 23:43:13.714445 123 log.go:172] (0xc00091e460) (3) Data frame sent\nI0520 23:43:13.714909 123 log.go:172] (0xc000c02630) Data frame received for 3\nI0520 23:43:13.714934 123 log.go:172] (0xc00091e460) (3) Data frame handling\nI0520 23:43:13.714945 123 log.go:172] (0xc00091e460) (3) Data frame sent\nI0520 23:43:13.714959 123 log.go:172] (0xc000c02630) Data frame received for 5\nI0520 23:43:13.714966 123 log.go:172] (0xc00091ec80) (5) Data frame handling\nI0520 23:43:13.714975 123 log.go:172] (0xc00091ec80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30746/\nI0520 23:43:13.718462 123 log.go:172] (0xc000c02630) Data frame received for 3\nI0520 23:43:13.718480 123 log.go:172] (0xc00091e460) (3) Data frame handling\nI0520 23:43:13.718494 123 log.go:172] (0xc00091e460) (3) Data frame sent\nI0520 23:43:13.718874 123 log.go:172] (0xc000c02630) Data frame received for 3\nI0520 23:43:13.718899 123 log.go:172] (0xc00091e460) (3) Data frame handling\nI0520 23:43:13.718908 123 log.go:172] (0xc00091e460) (3) Data frame sent\nI0520 23:43:13.718918 123 log.go:172] (0xc000c02630) Data frame received for 5\nI0520 23:43:13.718923 123 log.go:172] (0xc00091ec80) (5) Data frame handling\nI0520 23:43:13.718929 123 log.go:172] (0xc00091ec80) (5) 
Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30746/\nI0520 23:43:13.722457 123 log.go:172] (0xc000c02630) Data frame received for 3\nI0520 23:43:13.722469 123 log.go:172] (0xc00091e460) (3) Data frame handling\nI0520 23:43:13.722474 123 log.go:172] (0xc00091e460) (3) Data frame sent\nI0520 23:43:13.722865 123 log.go:172] (0xc000c02630) Data frame received for 3\nI0520 23:43:13.722886 123 log.go:172] (0xc00091e460) (3) Data frame handling\nI0520 23:43:13.722899 123 log.go:172] (0xc00091e460) (3) Data frame sent\nI0520 23:43:13.722912 123 log.go:172] (0xc000c02630) Data frame received for 5\nI0520 23:43:13.722918 123 log.go:172] (0xc00091ec80) (5) Data frame handling\nI0520 23:43:13.722924 123 log.go:172] (0xc00091ec80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30746/\nI0520 23:43:13.726929 123 log.go:172] (0xc000c02630) Data frame received for 3\nI0520 23:43:13.726954 123 log.go:172] (0xc00091e460) (3) Data frame handling\nI0520 23:43:13.726978 123 log.go:172] (0xc00091e460) (3) Data frame sent\nI0520 23:43:13.727580 123 log.go:172] (0xc000c02630) Data frame received for 3\nI0520 23:43:13.727591 123 log.go:172] (0xc00091e460) (3) Data frame handling\nI0520 23:43:13.727604 123 log.go:172] (0xc000c02630) Data frame received for 5\nI0520 23:43:13.727627 123 log.go:172] (0xc00091ec80) (5) Data frame handling\nI0520 23:43:13.727637 123 log.go:172] (0xc00091ec80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30746/\nI0520 23:43:13.727651 123 log.go:172] (0xc00091e460) (3) Data frame sent\nI0520 23:43:13.731820 123 log.go:172] (0xc000c02630) Data frame received for 3\nI0520 23:43:13.731847 123 log.go:172] (0xc00091e460) (3) Data frame handling\nI0520 23:43:13.731877 123 log.go:172] (0xc00091e460) (3) Data frame sent\nI0520 23:43:13.732421 123 log.go:172] (0xc000c02630) Data frame received for 3\nI0520 23:43:13.732440 123 log.go:172] (0xc00091e460) (3) Data frame 
handling\nI0520 23:43:13.732723 123 log.go:172] (0xc000c02630) Data frame received for 5\nI0520 23:43:13.732736 123 log.go:172] (0xc00091ec80) (5) Data frame handling\nI0520 23:43:13.734346 123 log.go:172] (0xc000c02630) Data frame received for 1\nI0520 23:43:13.734370 123 log.go:172] (0xc000938320) (1) Data frame handling\nI0520 23:43:13.734384 123 log.go:172] (0xc000938320) (1) Data frame sent\nI0520 23:43:13.734399 123 log.go:172] (0xc000c02630) (0xc000938320) Stream removed, broadcasting: 1\nI0520 23:43:13.734484 123 log.go:172] (0xc000c02630) Go away received\nI0520 23:43:13.734785 123 log.go:172] (0xc000c02630) (0xc000938320) Stream removed, broadcasting: 1\nI0520 23:43:13.734802 123 log.go:172] (0xc000c02630) (0xc00091e460) Stream removed, broadcasting: 3\nI0520 23:43:13.734816 123 log.go:172] (0xc000c02630) (0xc00091ec80) Stream removed, broadcasting: 5\n" May 20 23:43:13.739: INFO: stdout: "\naffinity-nodeport-rjb4j\naffinity-nodeport-rjb4j\naffinity-nodeport-rjb4j\naffinity-nodeport-rjb4j\naffinity-nodeport-rjb4j\naffinity-nodeport-rjb4j\naffinity-nodeport-rjb4j\naffinity-nodeport-rjb4j\naffinity-nodeport-rjb4j\naffinity-nodeport-rjb4j\naffinity-nodeport-rjb4j\naffinity-nodeport-rjb4j\naffinity-nodeport-rjb4j\naffinity-nodeport-rjb4j\naffinity-nodeport-rjb4j\naffinity-nodeport-rjb4j" May 20 23:43:13.739: INFO: Received response from host: May 20 23:43:13.739: INFO: Received response from host: affinity-nodeport-rjb4j May 20 23:43:13.739: INFO: Received response from host: affinity-nodeport-rjb4j May 20 23:43:13.739: INFO: Received response from host: affinity-nodeport-rjb4j May 20 23:43:13.739: INFO: Received response from host: affinity-nodeport-rjb4j May 20 23:43:13.739: INFO: Received response from host: affinity-nodeport-rjb4j May 20 23:43:13.739: INFO: Received response from host: affinity-nodeport-rjb4j May 20 23:43:13.739: INFO: Received response from host: affinity-nodeport-rjb4j May 20 23:43:13.739: INFO: Received response from host: 
affinity-nodeport-rjb4j May 20 23:43:13.739: INFO: Received response from host: affinity-nodeport-rjb4j May 20 23:43:13.739: INFO: Received response from host: affinity-nodeport-rjb4j May 20 23:43:13.739: INFO: Received response from host: affinity-nodeport-rjb4j May 20 23:43:13.739: INFO: Received response from host: affinity-nodeport-rjb4j May 20 23:43:13.739: INFO: Received response from host: affinity-nodeport-rjb4j May 20 23:43:13.739: INFO: Received response from host: affinity-nodeport-rjb4j May 20 23:43:13.739: INFO: Received response from host: affinity-nodeport-rjb4j May 20 23:43:13.739: INFO: Received response from host: affinity-nodeport-rjb4j May 20 23:43:13.739: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-6531, will wait for the garbage collector to delete the pods May 20 23:43:13.830: INFO: Deleting ReplicationController affinity-nodeport took: 6.86826ms May 20 23:43:14.331: INFO: Terminating ReplicationController affinity-nodeport pods took: 500.185627ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:43:25.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6531" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:26.810 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":8,"skipped":147,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:43:25.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:43:25.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9662" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":288,"completed":9,"skipped":167,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:43:25.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 20 23:43:25.579: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9022' May 20 23:43:25.907: INFO: stderr: "" May 20 23:43:25.907: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 20 23:43:25.907: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9022' May 20 23:43:26.299: INFO: stderr: "" May 20 23:43:26.299: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
May 20 23:43:27.303: INFO: Selector matched 1 pods for map[app:agnhost] May 20 23:43:27.303: INFO: Found 0 / 1 May 20 23:43:28.303: INFO: Selector matched 1 pods for map[app:agnhost] May 20 23:43:28.303: INFO: Found 0 / 1 May 20 23:43:29.304: INFO: Selector matched 1 pods for map[app:agnhost] May 20 23:43:29.304: INFO: Found 1 / 1 May 20 23:43:29.304: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 20 23:43:29.306: INFO: Selector matched 1 pods for map[app:agnhost] May 20 23:43:29.306: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 20 23:43:29.307: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe pod agnhost-master-46hrr --namespace=kubectl-9022' May 20 23:43:29.424: INFO: stderr: "" May 20 23:43:29.424: INFO: stdout: "Name: agnhost-master-46hrr\nNamespace: kubectl-9022\nPriority: 0\nNode: latest-worker/172.17.0.13\nStart Time: Wed, 20 May 2020 23:43:25 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.47\nIPs:\n IP: 10.244.1.47\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://de684a45cff457c2f965e1f5b329a42d6f4b04614d866f179d81cfa27b3e68c4\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 20 May 2020 23:43:28 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-h7zdp (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-h7zdp:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-h7zdp\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: 
\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-9022/agnhost-master-46hrr to latest-worker\n Normal Pulled 2s kubelet, latest-worker Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\" already present on machine\n Normal Created 1s kubelet, latest-worker Created container agnhost-master\n Normal Started 1s kubelet, latest-worker Started container agnhost-master\n" May 20 23:43:29.424: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-9022' May 20 23:43:29.572: INFO: stderr: "" May 20 23:43:29.572: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-9022\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-46hrr\n" May 20 23:43:29.572: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-9022' May 20 23:43:29.691: INFO: stderr: "" May 20 23:43:29.692: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-9022\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.102.189.228\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.47:6379\nSession Affinity: 
None\nEvents: \n" May 20 23:43:29.695: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe node latest-control-plane' May 20 23:43:29.828: INFO: stderr: "" May 20 23:43:29.828: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 29 Apr 2020 09:53:29 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Wed, 20 May 2020 23:43:27 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 20 May 2020 23:39:55 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 20 May 2020 23:39:55 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 20 May 2020 23:39:55 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 20 May 2020 23:39:55 +0000 Wed, 29 Apr 2020 09:54:06 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 
3939cf129c9d4d6e85e611ab996d9137\n System UUID: 2573ae1d-4849-412e-9a34-432f95556990\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.18.2\n Kube-Proxy Version: v1.18.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-66bff467f8-8n5vh 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 21d\n kube-system coredns-66bff467f8-qr7l5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 21d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 21d\n kube-system kindnet-8x7pf 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 21d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 21d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 21d\n kube-system kube-proxy-h8mhz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 21d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 21d\n local-path-storage local-path-provisioner-bd4bb6b75-bmf2h 0 (0%) 0 (0%) 0 (0%) 0 (0%) 21d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" May 20 23:43:29.829: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe namespace kubectl-9022' May 20 23:43:29.950: INFO: stderr: "" May 20 23:43:29.950: INFO: stdout: "Name: kubectl-9022\nLabels: e2e-framework=kubectl\n e2e-run=9712ad4d-9dac-49d9-a306-bb8bcb5fd1e8\nAnnotations: \nStatus: Active\n\nNo resource 
quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:43:29.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9022" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":288,"completed":10,"skipped":198,"failed":0} ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:43:29.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 20 23:43:30.054: INFO: Pod name pod-release: Found 0 pods out of 1 May 20 23:43:35.077: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:43:35.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8593" for this suite. 
• [SLOW TEST:5.231 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":288,"completed":11,"skipped":198,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:43:35.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:43:35.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3389" for this suite. 
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":288,"completed":12,"skipped":209,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:43:35.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 20 23:43:35.507: INFO: Waiting up to 5m0s for pod "pod-1bc9bbf3-eb12-4e54-9d52-cd2162392580" in namespace "emptydir-5314" to be "Succeeded or Failed" May 20 23:43:35.538: INFO: Pod "pod-1bc9bbf3-eb12-4e54-9d52-cd2162392580": Phase="Pending", Reason="", readiness=false. Elapsed: 31.212612ms May 20 23:43:37.542: INFO: Pod "pod-1bc9bbf3-eb12-4e54-9d52-cd2162392580": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034893984s May 20 23:43:39.546: INFO: Pod "pod-1bc9bbf3-eb12-4e54-9d52-cd2162392580": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038643722s May 20 23:43:41.550: INFO: Pod "pod-1bc9bbf3-eb12-4e54-9d52-cd2162392580": Phase="Running", Reason="", readiness=true. Elapsed: 6.042748972s May 20 23:43:43.555: INFO: Pod "pod-1bc9bbf3-eb12-4e54-9d52-cd2162392580": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.047472939s STEP: Saw pod success May 20 23:43:43.555: INFO: Pod "pod-1bc9bbf3-eb12-4e54-9d52-cd2162392580" satisfied condition "Succeeded or Failed" May 20 23:43:43.558: INFO: Trying to get logs from node latest-worker pod pod-1bc9bbf3-eb12-4e54-9d52-cd2162392580 container test-container: STEP: delete the pod May 20 23:43:43.603: INFO: Waiting for pod pod-1bc9bbf3-eb12-4e54-9d52-cd2162392580 to disappear May 20 23:43:43.618: INFO: Pod pod-1bc9bbf3-eb12-4e54-9d52-cd2162392580 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:43:43.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5314" for this suite. • [SLOW TEST:8.205 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":13,"skipped":216,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:43:43.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-f7dd3f7d-4fff-4688-9684-883ce0ed2cb3 STEP: Creating a pod to test consume secrets May 20 23:43:43.703: INFO: Waiting up to 5m0s for pod "pod-secrets-e597f4ad-6111-4241-853f-b36e54392cc8" in namespace "secrets-6963" to be "Succeeded or Failed" May 20 23:43:43.717: INFO: Pod "pod-secrets-e597f4ad-6111-4241-853f-b36e54392cc8": Phase="Pending", Reason="", readiness=false. Elapsed: 13.982469ms May 20 23:43:45.721: INFO: Pod "pod-secrets-e597f4ad-6111-4241-853f-b36e54392cc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017532819s May 20 23:43:47.726: INFO: Pod "pod-secrets-e597f4ad-6111-4241-853f-b36e54392cc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022517986s STEP: Saw pod success May 20 23:43:47.726: INFO: Pod "pod-secrets-e597f4ad-6111-4241-853f-b36e54392cc8" satisfied condition "Succeeded or Failed" May 20 23:43:47.729: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-e597f4ad-6111-4241-853f-b36e54392cc8 container secret-volume-test: STEP: delete the pod May 20 23:43:47.779: INFO: Waiting for pod pod-secrets-e597f4ad-6111-4241-853f-b36e54392cc8 to disappear May 20 23:43:47.807: INFO: Pod pod-secrets-e597f4ad-6111-4241-853f-b36e54392cc8 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:43:47.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6963" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":14,"skipped":221,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:43:47.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium May 20 23:43:47.877: INFO: Waiting up to 5m0s for pod "pod-96da3138-0e15-4bfb-aa9c-6d1c5cd0b8a0" in namespace "emptydir-3985" to be "Succeeded or Failed" May 20 23:43:47.882: INFO: Pod "pod-96da3138-0e15-4bfb-aa9c-6d1c5cd0b8a0": Phase="Pending", Reason="", readiness=false. Elapsed: 5.014907ms May 20 23:43:49.886: INFO: Pod "pod-96da3138-0e15-4bfb-aa9c-6d1c5cd0b8a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008991239s May 20 23:43:51.891: INFO: Pod "pod-96da3138-0e15-4bfb-aa9c-6d1c5cd0b8a0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013973726s STEP: Saw pod success May 20 23:43:51.891: INFO: Pod "pod-96da3138-0e15-4bfb-aa9c-6d1c5cd0b8a0" satisfied condition "Succeeded or Failed" May 20 23:43:51.895: INFO: Trying to get logs from node latest-worker pod pod-96da3138-0e15-4bfb-aa9c-6d1c5cd0b8a0 container test-container: STEP: delete the pod May 20 23:43:51.959: INFO: Waiting for pod pod-96da3138-0e15-4bfb-aa9c-6d1c5cd0b8a0 to disappear May 20 23:43:51.972: INFO: Pod pod-96da3138-0e15-4bfb-aa9c-6d1c5cd0b8a0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:43:51.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3985" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":15,"skipped":222,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:43:51.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 20 23:43:52.027: INFO: Creating ReplicaSet my-hostname-basic-c77aea3f-e4f6-46f4-be5d-7240545bb786 May 20 23:43:52.056: INFO: Pod name my-hostname-basic-c77aea3f-e4f6-46f4-be5d-7240545bb786: Found 0 pods out of 1 
May 20 23:43:57.060: INFO: Pod name my-hostname-basic-c77aea3f-e4f6-46f4-be5d-7240545bb786: Found 1 pods out of 1 May 20 23:43:57.060: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-c77aea3f-e4f6-46f4-be5d-7240545bb786" is running May 20 23:43:57.063: INFO: Pod "my-hostname-basic-c77aea3f-e4f6-46f4-be5d-7240545bb786-rkmqr" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-20 23:43:52 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-20 23:43:54 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-20 23:43:54 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-20 23:43:52 +0000 UTC Reason: Message:}]) May 20 23:43:57.064: INFO: Trying to dial the pod May 20 23:44:02.076: INFO: Controller my-hostname-basic-c77aea3f-e4f6-46f4-be5d-7240545bb786: Got expected result from replica 1 [my-hostname-basic-c77aea3f-e4f6-46f4-be5d-7240545bb786-rkmqr]: "my-hostname-basic-c77aea3f-e4f6-46f4-be5d-7240545bb786-rkmqr", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:44:02.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6945" for this suite. 
• [SLOW TEST:10.103 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":16,"skipped":244,"failed":0} S ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:44:02.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 20 23:44:02.295: INFO: The status of Pod test-webserver-0ba14fe5-500b-48fc-8fb5-f7d8f7802b6b is Pending, waiting for it to be Running (with Ready = true) May 20 23:44:04.300: INFO: The status of Pod test-webserver-0ba14fe5-500b-48fc-8fb5-f7d8f7802b6b is Pending, waiting for it to be Running (with Ready = true) May 20 23:44:06.300: INFO: The status of Pod test-webserver-0ba14fe5-500b-48fc-8fb5-f7d8f7802b6b is Running (Ready = false) May 20 23:44:08.299: INFO: The status of 
Pod test-webserver-0ba14fe5-500b-48fc-8fb5-f7d8f7802b6b is Running (Ready = false) May 20 23:44:10.300: INFO: The status of Pod test-webserver-0ba14fe5-500b-48fc-8fb5-f7d8f7802b6b is Running (Ready = false) May 20 23:44:12.300: INFO: The status of Pod test-webserver-0ba14fe5-500b-48fc-8fb5-f7d8f7802b6b is Running (Ready = false) May 20 23:44:14.299: INFO: The status of Pod test-webserver-0ba14fe5-500b-48fc-8fb5-f7d8f7802b6b is Running (Ready = false) May 20 23:44:16.300: INFO: The status of Pod test-webserver-0ba14fe5-500b-48fc-8fb5-f7d8f7802b6b is Running (Ready = false) May 20 23:44:18.299: INFO: The status of Pod test-webserver-0ba14fe5-500b-48fc-8fb5-f7d8f7802b6b is Running (Ready = false) May 20 23:44:20.300: INFO: The status of Pod test-webserver-0ba14fe5-500b-48fc-8fb5-f7d8f7802b6b is Running (Ready = false) May 20 23:44:22.301: INFO: The status of Pod test-webserver-0ba14fe5-500b-48fc-8fb5-f7d8f7802b6b is Running (Ready = false) May 20 23:44:24.300: INFO: The status of Pod test-webserver-0ba14fe5-500b-48fc-8fb5-f7d8f7802b6b is Running (Ready = true) May 20 23:44:24.304: INFO: Container started at 2020-05-20 23:44:05 +0000 UTC, pod became ready at 2020-05-20 23:44:23 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:44:24.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2151" for this suite. 
• [SLOW TEST:22.229 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":288,"completed":17,"skipped":245,"failed":0} SSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:44:24.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-90c475fb-eb1c-4aca-af81-8fef1aaed373 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:44:24.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4577" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":288,"completed":18,"skipped":251,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:44:24.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-1434 May 20 23:44:28.597: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1434 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 20 23:44:28.952: INFO: stderr: "I0520 23:44:28.742629 296 log.go:172] (0xc00095a000) (0xc00051c140) Create stream\nI0520 23:44:28.742721 296 log.go:172] (0xc00095a000) (0xc00051c140) Stream added, broadcasting: 1\nI0520 23:44:28.745920 296 log.go:172] (0xc00095a000) Reply frame received for 1\nI0520 23:44:28.745956 296 log.go:172] (0xc00095a000) (0xc000386640) Create stream\nI0520 23:44:28.745966 296 log.go:172] (0xc00095a000) (0xc000386640) Stream added, broadcasting: 3\nI0520 23:44:28.746808 296 log.go:172] (0xc00095a000) Reply frame received for 3\nI0520 23:44:28.746837 296 
log.go:172] (0xc00095a000) (0xc00051d540) Create stream\nI0520 23:44:28.746854 296 log.go:172] (0xc00095a000) (0xc00051d540) Stream added, broadcasting: 5\nI0520 23:44:28.747597 296 log.go:172] (0xc00095a000) Reply frame received for 5\nI0520 23:44:28.868366 296 log.go:172] (0xc00095a000) Data frame received for 5\nI0520 23:44:28.868399 296 log.go:172] (0xc00051d540) (5) Data frame handling\nI0520 23:44:28.868425 296 log.go:172] (0xc00051d540) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0520 23:44:28.944367 296 log.go:172] (0xc00095a000) Data frame received for 3\nI0520 23:44:28.944413 296 log.go:172] (0xc000386640) (3) Data frame handling\nI0520 23:44:28.944452 296 log.go:172] (0xc000386640) (3) Data frame sent\nI0520 23:44:28.944521 296 log.go:172] (0xc00095a000) Data frame received for 3\nI0520 23:44:28.944554 296 log.go:172] (0xc000386640) (3) Data frame handling\nI0520 23:44:28.944748 296 log.go:172] (0xc00095a000) Data frame received for 5\nI0520 23:44:28.944767 296 log.go:172] (0xc00051d540) (5) Data frame handling\nI0520 23:44:28.946738 296 log.go:172] (0xc00095a000) Data frame received for 1\nI0520 23:44:28.946774 296 log.go:172] (0xc00051c140) (1) Data frame handling\nI0520 23:44:28.946813 296 log.go:172] (0xc00051c140) (1) Data frame sent\nI0520 23:44:28.946881 296 log.go:172] (0xc00095a000) (0xc00051c140) Stream removed, broadcasting: 1\nI0520 23:44:28.946920 296 log.go:172] (0xc00095a000) Go away received\nI0520 23:44:28.947344 296 log.go:172] (0xc00095a000) (0xc00051c140) Stream removed, broadcasting: 1\nI0520 23:44:28.947371 296 log.go:172] (0xc00095a000) (0xc000386640) Stream removed, broadcasting: 3\nI0520 23:44:28.947385 296 log.go:172] (0xc00095a000) (0xc00051d540) Stream removed, broadcasting: 5\n" May 20 23:44:28.952: INFO: stdout: "iptables" May 20 23:44:28.952: INFO: proxyMode: iptables May 20 23:44:28.957: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 20 23:44:28.983: INFO: Pod 
kube-proxy-mode-detector still exists May 20 23:44:30.984: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 20 23:44:30.988: INFO: Pod kube-proxy-mode-detector still exists May 20 23:44:32.984: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 20 23:44:32.987: INFO: Pod kube-proxy-mode-detector still exists May 20 23:44:34.984: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 20 23:44:34.987: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-1434 STEP: creating replication controller affinity-nodeport-timeout in namespace services-1434 I0520 23:44:35.027313 8 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-1434, replica count: 3 I0520 23:44:38.077697 8 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 23:44:41.077953 8 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 20 23:44:41.089: INFO: Creating new exec pod May 20 23:44:46.115: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1434 execpod-affinitylsdzr -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' May 20 23:44:46.341: INFO: stderr: "I0520 23:44:46.259113 319 log.go:172] (0xc000b01340) (0xc00081dea0) Create stream\nI0520 23:44:46.259184 319 log.go:172] (0xc000b01340) (0xc00081dea0) Stream added, broadcasting: 1\nI0520 23:44:46.264570 319 log.go:172] (0xc000b01340) Reply frame received for 1\nI0520 23:44:46.264610 319 log.go:172] (0xc000b01340) (0xc000808c80) Create stream\nI0520 23:44:46.264618 319 log.go:172] (0xc000b01340) (0xc000808c80) Stream added, broadcasting: 3\nI0520 23:44:46.265562 319 log.go:172] 
(0xc000b01340) Reply frame received for 3\nI0520 23:44:46.265596 319 log.go:172] (0xc000b01340) (0xc0007fa500) Create stream\nI0520 23:44:46.265606 319 log.go:172] (0xc000b01340) (0xc0007fa500) Stream added, broadcasting: 5\nI0520 23:44:46.266349 319 log.go:172] (0xc000b01340) Reply frame received for 5\nI0520 23:44:46.335159 319 log.go:172] (0xc000b01340) Data frame received for 3\nI0520 23:44:46.335200 319 log.go:172] (0xc000808c80) (3) Data frame handling\nI0520 23:44:46.335259 319 log.go:172] (0xc000b01340) Data frame received for 5\nI0520 23:44:46.335318 319 log.go:172] (0xc0007fa500) (5) Data frame handling\nI0520 23:44:46.335351 319 log.go:172] (0xc0007fa500) (5) Data frame sent\nI0520 23:44:46.335378 319 log.go:172] (0xc000b01340) Data frame received for 5\nI0520 23:44:46.335413 319 log.go:172] (0xc0007fa500) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0520 23:44:46.337384 319 log.go:172] (0xc000b01340) Data frame received for 1\nI0520 23:44:46.337404 319 log.go:172] (0xc00081dea0) (1) Data frame handling\nI0520 23:44:46.337416 319 log.go:172] (0xc00081dea0) (1) Data frame sent\nI0520 23:44:46.337427 319 log.go:172] (0xc000b01340) (0xc00081dea0) Stream removed, broadcasting: 1\nI0520 23:44:46.337519 319 log.go:172] (0xc000b01340) Go away received\nI0520 23:44:46.337809 319 log.go:172] (0xc000b01340) (0xc00081dea0) Stream removed, broadcasting: 1\nI0520 23:44:46.337825 319 log.go:172] (0xc000b01340) (0xc000808c80) Stream removed, broadcasting: 3\nI0520 23:44:46.337833 319 log.go:172] (0xc000b01340) (0xc0007fa500) Stream removed, broadcasting: 5\n" May 20 23:44:46.341: INFO: stdout: "" May 20 23:44:46.342: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1434 execpod-affinitylsdzr -- /bin/sh -x -c nc -zv -t -w 2 10.104.179.90 80' May 20 23:44:46.563: INFO: stderr: "I0520 
23:44:46.479765 338 log.go:172] (0xc000bf0dc0) (0xc000537b80) Create stream\nI0520 23:44:46.479814 338 log.go:172] (0xc000bf0dc0) (0xc000537b80) Stream added, broadcasting: 1\nI0520 23:44:46.485627 338 log.go:172] (0xc000bf0dc0) Reply frame received for 1\nI0520 23:44:46.485750 338 log.go:172] (0xc000bf0dc0) (0xc0003748c0) Create stream\nI0520 23:44:46.485816 338 log.go:172] (0xc000bf0dc0) (0xc0003748c0) Stream added, broadcasting: 3\nI0520 23:44:46.488510 338 log.go:172] (0xc000bf0dc0) Reply frame received for 3\nI0520 23:44:46.488587 338 log.go:172] (0xc000bf0dc0) (0xc0002266e0) Create stream\nI0520 23:44:46.488610 338 log.go:172] (0xc000bf0dc0) (0xc0002266e0) Stream added, broadcasting: 5\nI0520 23:44:46.492605 338 log.go:172] (0xc000bf0dc0) Reply frame received for 5\nI0520 23:44:46.556332 338 log.go:172] (0xc000bf0dc0) Data frame received for 3\nI0520 23:44:46.556371 338 log.go:172] (0xc0003748c0) (3) Data frame handling\nI0520 23:44:46.556396 338 log.go:172] (0xc000bf0dc0) Data frame received for 5\nI0520 23:44:46.556406 338 log.go:172] (0xc0002266e0) (5) Data frame handling\nI0520 23:44:46.556415 338 log.go:172] (0xc0002266e0) (5) Data frame sent\nI0520 23:44:46.556424 338 log.go:172] (0xc000bf0dc0) Data frame received for 5\nI0520 23:44:46.556435 338 log.go:172] (0xc0002266e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.104.179.90 80\nConnection to 10.104.179.90 80 port [tcp/http] succeeded!\nI0520 23:44:46.558397 338 log.go:172] (0xc000bf0dc0) Data frame received for 1\nI0520 23:44:46.558433 338 log.go:172] (0xc000537b80) (1) Data frame handling\nI0520 23:44:46.558470 338 log.go:172] (0xc000537b80) (1) Data frame sent\nI0520 23:44:46.558492 338 log.go:172] (0xc000bf0dc0) (0xc000537b80) Stream removed, broadcasting: 1\nI0520 23:44:46.558548 338 log.go:172] (0xc000bf0dc0) Go away received\nI0520 23:44:46.558948 338 log.go:172] (0xc000bf0dc0) (0xc000537b80) Stream removed, broadcasting: 1\nI0520 23:44:46.558965 338 log.go:172] (0xc000bf0dc0) (0xc0003748c0) 
Stream removed, broadcasting: 3\nI0520 23:44:46.558974 338 log.go:172] (0xc000bf0dc0) (0xc0002266e0) Stream removed, broadcasting: 5\n" May 20 23:44:46.563: INFO: stdout: "" May 20 23:44:46.564: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1434 execpod-affinitylsdzr -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31334' May 20 23:44:46.789: INFO: stderr: "I0520 23:44:46.699959 358 log.go:172] (0xc000b453f0) (0xc00085afa0) Create stream\nI0520 23:44:46.700021 358 log.go:172] (0xc000b453f0) (0xc00085afa0) Stream added, broadcasting: 1\nI0520 23:44:46.703673 358 log.go:172] (0xc000b453f0) Reply frame received for 1\nI0520 23:44:46.703718 358 log.go:172] (0xc000b453f0) (0xc000853c20) Create stream\nI0520 23:44:46.703732 358 log.go:172] (0xc000b453f0) (0xc000853c20) Stream added, broadcasting: 3\nI0520 23:44:46.704742 358 log.go:172] (0xc000b453f0) Reply frame received for 3\nI0520 23:44:46.704809 358 log.go:172] (0xc000b453f0) (0xc000848d20) Create stream\nI0520 23:44:46.704842 358 log.go:172] (0xc000b453f0) (0xc000848d20) Stream added, broadcasting: 5\nI0520 23:44:46.706233 358 log.go:172] (0xc000b453f0) Reply frame received for 5\nI0520 23:44:46.781305 358 log.go:172] (0xc000b453f0) Data frame received for 5\nI0520 23:44:46.781341 358 log.go:172] (0xc000848d20) (5) Data frame handling\nI0520 23:44:46.781363 358 log.go:172] (0xc000848d20) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 31334\nConnection to 172.17.0.13 31334 port [tcp/31334] succeeded!\nI0520 23:44:46.781394 358 log.go:172] (0xc000b453f0) Data frame received for 3\nI0520 23:44:46.781415 358 log.go:172] (0xc000853c20) (3) Data frame handling\nI0520 23:44:46.781642 358 log.go:172] (0xc000b453f0) Data frame received for 5\nI0520 23:44:46.781662 358 log.go:172] (0xc000848d20) (5) Data frame handling\nI0520 23:44:46.783152 358 log.go:172] (0xc000b453f0) Data frame received for 1\nI0520 23:44:46.783179 358 log.go:172] 
(0xc00085afa0) (1) Data frame handling\nI0520 23:44:46.783195 358 log.go:172] (0xc00085afa0) (1) Data frame sent\nI0520 23:44:46.783209 358 log.go:172] (0xc000b453f0) (0xc00085afa0) Stream removed, broadcasting: 1\nI0520 23:44:46.783304 358 log.go:172] (0xc000b453f0) Go away received\nI0520 23:44:46.783696 358 log.go:172] (0xc000b453f0) (0xc00085afa0) Stream removed, broadcasting: 1\nI0520 23:44:46.783712 358 log.go:172] (0xc000b453f0) (0xc000853c20) Stream removed, broadcasting: 3\nI0520 23:44:46.783725 358 log.go:172] (0xc000b453f0) (0xc000848d20) Stream removed, broadcasting: 5\n" May 20 23:44:46.789: INFO: stdout: "" May 20 23:44:46.789: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1434 execpod-affinitylsdzr -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31334' May 20 23:44:46.989: INFO: stderr: "I0520 23:44:46.922553 378 log.go:172] (0xc000b93130) (0xc0006972c0) Create stream\nI0520 23:44:46.922732 378 log.go:172] (0xc000b93130) (0xc0006972c0) Stream added, broadcasting: 1\nI0520 23:44:46.928748 378 log.go:172] (0xc000b93130) Reply frame received for 1\nI0520 23:44:46.928795 378 log.go:172] (0xc000b93130) (0xc00069ef00) Create stream\nI0520 23:44:46.928808 378 log.go:172] (0xc000b93130) (0xc00069ef00) Stream added, broadcasting: 3\nI0520 23:44:46.929731 378 log.go:172] (0xc000b93130) Reply frame received for 3\nI0520 23:44:46.929754 378 log.go:172] (0xc000b93130) (0xc00067c500) Create stream\nI0520 23:44:46.929761 378 log.go:172] (0xc000b93130) (0xc00067c500) Stream added, broadcasting: 5\nI0520 23:44:46.930519 378 log.go:172] (0xc000b93130) Reply frame received for 5\nI0520 23:44:46.980006 378 log.go:172] (0xc000b93130) Data frame received for 5\nI0520 23:44:46.980057 378 log.go:172] (0xc00067c500) (5) Data frame handling\nI0520 23:44:46.980099 378 log.go:172] (0xc00067c500) (5) Data frame sent\nI0520 23:44:46.980131 378 log.go:172] (0xc000b93130) Data frame received for 
5\nI0520 23:44:46.980148 378 log.go:172] (0xc00067c500) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31334\nConnection to 172.17.0.12 31334 port [tcp/31334] succeeded!\nI0520 23:44:46.980213 378 log.go:172] (0xc00067c500) (5) Data frame sent\nI0520 23:44:46.980493 378 log.go:172] (0xc000b93130) Data frame received for 3\nI0520 23:44:46.980512 378 log.go:172] (0xc00069ef00) (3) Data frame handling\nI0520 23:44:46.980678 378 log.go:172] (0xc000b93130) Data frame received for 5\nI0520 23:44:46.980703 378 log.go:172] (0xc00067c500) (5) Data frame handling\nI0520 23:44:46.982280 378 log.go:172] (0xc000b93130) Data frame received for 1\nI0520 23:44:46.982299 378 log.go:172] (0xc0006972c0) (1) Data frame handling\nI0520 23:44:46.982315 378 log.go:172] (0xc0006972c0) (1) Data frame sent\nI0520 23:44:46.982408 378 log.go:172] (0xc000b93130) (0xc0006972c0) Stream removed, broadcasting: 1\nI0520 23:44:46.982592 378 log.go:172] (0xc000b93130) Go away received\nI0520 23:44:46.982823 378 log.go:172] (0xc000b93130) (0xc0006972c0) Stream removed, broadcasting: 1\nI0520 23:44:46.982840 378 log.go:172] (0xc000b93130) (0xc00069ef00) Stream removed, broadcasting: 3\nI0520 23:44:46.982849 378 log.go:172] (0xc000b93130) (0xc00067c500) Stream removed, broadcasting: 5\n" May 20 23:44:46.989: INFO: stdout: "" May 20 23:44:46.989: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1434 execpod-affinitylsdzr -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31334/ ; done' May 20 23:44:47.269: INFO: stderr: "I0520 23:44:47.117100 398 log.go:172] (0xc000ab4000) (0xc000967040) Create stream\nI0520 23:44:47.117316 398 log.go:172] (0xc000ab4000) (0xc000967040) Stream added, broadcasting: 1\nI0520 23:44:47.120052 398 log.go:172] (0xc000ab4000) Reply frame received for 1\nI0520 23:44:47.120104 398 log.go:172] (0xc000ab4000) (0xc0009628c0) Create stream\nI0520 
23:44:47.120123 398 log.go:172] (0xc000ab4000) (0xc0009628c0) Stream added, broadcasting: 3\nI0520 23:44:47.121871 398 log.go:172] (0xc000ab4000) Reply frame received for 3\nI0520 23:44:47.121901 398 log.go:172] (0xc000ab4000) (0xc00096ff40) Create stream\nI0520 23:44:47.121909 398 log.go:172] (0xc000ab4000) (0xc00096ff40) Stream added, broadcasting: 5\nI0520 23:44:47.122906 398 log.go:172] (0xc000ab4000) Reply frame received for 5\nI0520 23:44:47.173882 398 log.go:172] (0xc000ab4000) Data frame received for 3\nI0520 23:44:47.173901 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.173908 398 log.go:172] (0xc0009628c0) (3) Data frame sent\nI0520 23:44:47.173956 398 log.go:172] (0xc000ab4000) Data frame received for 5\nI0520 23:44:47.173981 398 log.go:172] (0xc00096ff40) (5) Data frame handling\nI0520 23:44:47.174009 398 log.go:172] (0xc00096ff40) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31334/\nI0520 23:44:47.178648 398 log.go:172] (0xc000ab4000) Data frame received for 3\nI0520 23:44:47.178668 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.178699 398 log.go:172] (0xc0009628c0) (3) Data frame sent\nI0520 23:44:47.179284 398 log.go:172] (0xc000ab4000) Data frame received for 5\nI0520 23:44:47.179315 398 log.go:172] (0xc00096ff40) (5) Data frame handling\nI0520 23:44:47.179335 398 log.go:172] (0xc00096ff40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31334/\nI0520 23:44:47.179462 398 log.go:172] (0xc000ab4000) Data frame received for 3\nI0520 23:44:47.179490 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.179511 398 log.go:172] (0xc0009628c0) (3) Data frame sent\nI0520 23:44:47.182924 398 log.go:172] (0xc000ab4000) Data frame received for 3\nI0520 23:44:47.182956 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.182987 398 log.go:172] (0xc0009628c0) (3) Data frame sent\nI0520 23:44:47.183383 
398 log.go:172] (0xc000ab4000) Data frame received for 3\nI0520 23:44:47.183404 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.183428 398 log.go:172] (0xc0009628c0) (3) Data frame sent\nI0520 23:44:47.183444 398 log.go:172] (0xc000ab4000) Data frame received for 5\nI0520 23:44:47.183459 398 log.go:172] (0xc00096ff40) (5) Data frame handling\nI0520 23:44:47.183476 398 log.go:172] (0xc00096ff40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31334/\nI0520 23:44:47.190823 398 log.go:172] (0xc000ab4000) Data frame received for 3\nI0520 23:44:47.190835 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.190845 398 log.go:172] (0xc0009628c0) (3) Data frame sent\nI0520 23:44:47.191427 398 log.go:172] (0xc000ab4000) Data frame received for 3\nI0520 23:44:47.191456 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.191465 398 log.go:172] (0xc0009628c0) (3) Data frame sent\nI0520 23:44:47.191492 398 log.go:172] (0xc000ab4000) Data frame received for 5\nI0520 23:44:47.191520 398 log.go:172] (0xc00096ff40) (5) Data frame handling\nI0520 23:44:47.191541 398 log.go:172] (0xc00096ff40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31334/\nI0520 23:44:47.198441 398 log.go:172] (0xc000ab4000) Data frame received for 3\nI0520 23:44:47.198459 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.198471 398 log.go:172] (0xc0009628c0) (3) Data frame sent\nI0520 23:44:47.198938 398 log.go:172] (0xc000ab4000) Data frame received for 5\nI0520 23:44:47.198951 398 log.go:172] (0xc00096ff40) (5) Data frame handling\nI0520 23:44:47.198962 398 log.go:172] (0xc00096ff40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31334/\nI0520 23:44:47.199033 398 log.go:172] (0xc000ab4000) Data frame received for 3\nI0520 23:44:47.199076 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.199119 398 
log.go:172] (0xc0009628c0) (3) Data frame sent\nI0520 23:44:47.203829 398 log.go:172] (0xc000ab4000) Data frame received for 3\nI0520 23:44:47.203857 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.203884 398 log.go:172] (0xc0009628c0) (3) Data frame sent\nI0520 23:44:47.204355 398 log.go:172] (0xc000ab4000) Data frame received for 3\nI0520 23:44:47.204387 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.204401 398 log.go:172] (0xc0009628c0) (3) Data frame sent\nI0520 23:44:47.204417 398 log.go:172] (0xc000ab4000) Data frame received for 5\nI0520 23:44:47.204430 398 log.go:172] (0xc00096ff40) (5) Data frame handling\nI0520 23:44:47.204461 398 log.go:172] (0xc00096ff40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31334/\nI0520 23:44:47.211304 398 log.go:172] (0xc000ab4000) Data frame received for 3\nI0520 23:44:47.211321 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.211335 398 log.go:172] (0xc0009628c0) (3) Data frame sent\nI0520 23:44:47.211918 398 log.go:172] (0xc000ab4000) Data frame received for 5\nI0520 23:44:47.211946 398 log.go:172] (0xc00096ff40) (5) Data frame handling\nI0520 23:44:47.211974 398 log.go:172] (0xc00096ff40) (5) Data frame sent\n+ echo\nI0520 23:44:47.212085 398 log.go:172] (0xc000ab4000) Data frame received for 3\nI0520 23:44:47.212111 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.212121 398 log.go:172] (0xc0009628c0) (3) Data frame sent\nI0520 23:44:47.212134 398 log.go:172] (0xc000ab4000) Data frame received for 5\nI0520 23:44:47.212141 398 log.go:172] (0xc00096ff40) (5) Data frame handling\nI0520 23:44:47.212153 398 log.go:172] (0xc00096ff40) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31334/\nI0520 23:44:47.216486 398 log.go:172] (0xc000ab4000) Data frame received for 3\nI0520 23:44:47.216504 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.216517 398 
log.go:172] (0xc0009628c0) (3) Data frame sent\nI0520 23:44:47.216896 398 log.go:172] (0xc000ab4000) Data frame received for 3\nI0520 23:44:47.216914 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.216924 398 log.go:172] (0xc0009628c0) (3) Data frame sent\nI0520 23:44:47.216938 398 log.go:172] (0xc000ab4000) Data frame received for 5\nI0520 23:44:47.216960 398 log.go:172] (0xc00096ff40) (5) Data frame handling\nI0520 23:44:47.216981 398 log.go:172] (0xc00096ff40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31334/\nI0520 23:44:47.223193 398 log.go:172] (0xc000ab4000) Data frame received for 3\nI0520 23:44:47.223207 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.223217 398 log.go:172] (0xc0009628c0) (3) Data frame sent\nI0520 23:44:47.223799 398 log.go:172] (0xc000ab4000) Data frame received for 3\nI0520 23:44:47.223821 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.223830 398 log.go:172] (0xc0009628c0) (3) Data frame sent\nI0520 23:44:47.223842 398 log.go:172] (0xc000ab4000) Data frame received for 5\nI0520 23:44:47.223849 398 log.go:172] (0xc00096ff40) (5) Data frame handling\nI0520 23:44:47.223860 398 log.go:172] (0xc00096ff40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31334/\nI0520 23:44:47.227367 398 log.go:172] (0xc000ab4000) Data frame received for 3\nI0520 23:44:47.227383 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.227417 398 log.go:172] (0xc0009628c0) (3) Data frame sent\nI0520 23:44:47.227804 398 log.go:172] (0xc000ab4000) Data frame received for 3\nI0520 23:44:47.227825 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.227841 398 log.go:172] (0xc0009628c0) (3) Data frame sent\nI0520 23:44:47.227872 398 log.go:172] (0xc000ab4000) Data frame received for 5\nI0520 23:44:47.227894 398 log.go:172] (0xc00096ff40) (5) Data frame handling\nI0520 23:44:47.227909 398 
log.go:172] (0xc00096ff40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31334/\nI0520 23:44:47.231342 398 log.go:172] (0xc000ab4000) Data frame received for 3\nI0520 23:44:47.231354 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.231365 398 log.go:172] (0xc0009628c0) (3) Data frame sent\nI0520 23:44:47.231806 398 log.go:172] (0xc000ab4000) Data frame received for 3\nI0520 23:44:47.231834 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.231847 398 log.go:172] (0xc0009628c0) (3) Data frame sent\nI0520 23:44:47.231860 398 log.go:172] (0xc000ab4000) Data frame received for 5\nI0520 23:44:47.231868 398 log.go:172] (0xc00096ff40) (5) Data frame handling\nI0520 23:44:47.231880 398 log.go:172] (0xc00096ff40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31334/\nI0520 23:44:47.235724 398 log.go:172] (0xc000ab4000) Data frame received for 3\nI0520 23:44:47.235758 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.235790 398 log.go:172] (0xc0009628c0) (3) Data frame sent\nI0520 23:44:47.236151 398 log.go:172] (0xc000ab4000) Data frame received for 5\nI0520 23:44:47.236169 398 log.go:172] (0xc00096ff40) (5) Data frame handling\nI0520 23:44:47.236185 398 log.go:172] (0xc00096ff40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31334/\nI0520 23:44:47.236229 398 log.go:172] (0xc000ab4000) Data frame received for 3\nI0520 23:44:47.236251 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.236272 398 log.go:172] (0xc0009628c0) (3) Data frame sent\nI0520 23:44:47.240477 398 log.go:172] (0xc000ab4000) Data frame received for 3\nI0520 23:44:47.240498 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.240518 398 log.go:172] (0xc0009628c0) (3) Data frame sent\nI0520 23:44:47.240977 398 log.go:172] (0xc000ab4000) Data frame received for 5\nI0520 23:44:47.241010 398 log.go:172] 
(0xc00096ff40) (5) Data frame handling\nI0520 23:44:47.241031 398 log.go:172] (0xc00096ff40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31334/\nI0520 23:44:47.241750 398 log.go:172] (0xc000ab4000) Data frame received for 3\nI0520 23:44:47.241765 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.241775 398 log.go:172] (0xc0009628c0) (3) Data frame sent\nI0520 23:44:47.245461 398 log.go:172] (0xc000ab4000) Data frame received for 3\nI0520 23:44:47.245484 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.245502 398 log.go:172] (0xc0009628c0) (3) Data frame sent\nI0520 23:44:47.245835 398 log.go:172] (0xc000ab4000) Data frame received for 3\nI0520 23:44:47.245855 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.245882 398 log.go:172] (0xc0009628c0) (3) Data frame sent\nI0520 23:44:47.245912 398 log.go:172] (0xc000ab4000) Data frame received for 5\nI0520 23:44:47.245925 398 log.go:172] (0xc00096ff40) (5) Data frame handling\nI0520 23:44:47.245940 398 log.go:172] (0xc00096ff40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31334/\nI0520 23:44:47.249787 398 log.go:172] (0xc000ab4000) Data frame received for 3\nI0520 23:44:47.249814 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.249837 398 log.go:172] (0xc0009628c0) (3) Data frame sent\nI0520 23:44:47.250498 398 log.go:172] (0xc000ab4000) Data frame received for 5\nI0520 23:44:47.250529 398 log.go:172] (0xc00096ff40) (5) Data frame handling\nI0520 23:44:47.250544 398 log.go:172] (0xc00096ff40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31334/\nI0520 23:44:47.250561 398 log.go:172] (0xc000ab4000) Data frame received for 3\nI0520 23:44:47.250581 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.250601 398 log.go:172] (0xc0009628c0) (3) Data frame sent\nI0520 23:44:47.254075 398 log.go:172] (0xc000ab4000) Data 
frame received for 3\nI0520 23:44:47.254102 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.254122 398 log.go:172] (0xc0009628c0) (3) Data frame sent\nI0520 23:44:47.254393 398 log.go:172] (0xc000ab4000) Data frame received for 3\nI0520 23:44:47.254415 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.254449 398 log.go:172] (0xc0009628c0) (3) Data frame sent\nI0520 23:44:47.254464 398 log.go:172] (0xc000ab4000) Data frame received for 5\nI0520 23:44:47.254477 398 log.go:172] (0xc00096ff40) (5) Data frame handling\nI0520 23:44:47.254500 398 log.go:172] (0xc00096ff40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31334/\nI0520 23:44:47.259472 398 log.go:172] (0xc000ab4000) Data frame received for 3\nI0520 23:44:47.259489 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.259503 398 log.go:172] (0xc0009628c0) (3) Data frame sent\nI0520 23:44:47.260285 398 log.go:172] (0xc000ab4000) Data frame received for 5\nI0520 23:44:47.260313 398 log.go:172] (0xc00096ff40) (5) Data frame handling\nI0520 23:44:47.260337 398 log.go:172] (0xc000ab4000) Data frame received for 3\nI0520 23:44:47.260366 398 log.go:172] (0xc0009628c0) (3) Data frame handling\nI0520 23:44:47.262311 398 log.go:172] (0xc000ab4000) Data frame received for 1\nI0520 23:44:47.262347 398 log.go:172] (0xc000967040) (1) Data frame handling\nI0520 23:44:47.262359 398 log.go:172] (0xc000967040) (1) Data frame sent\nI0520 23:44:47.262371 398 log.go:172] (0xc000ab4000) (0xc000967040) Stream removed, broadcasting: 1\nI0520 23:44:47.262399 398 log.go:172] (0xc000ab4000) Go away received\nI0520 23:44:47.262750 398 log.go:172] (0xc000ab4000) (0xc000967040) Stream removed, broadcasting: 1\nI0520 23:44:47.262774 398 log.go:172] (0xc000ab4000) (0xc0009628c0) Stream removed, broadcasting: 3\nI0520 23:44:47.262793 398 log.go:172] (0xc000ab4000) (0xc00096ff40) Stream removed, broadcasting: 5\n" May 20 23:44:47.270: INFO: stdout: 
"\naffinity-nodeport-timeout-xswgt\naffinity-nodeport-timeout-xswgt\naffinity-nodeport-timeout-xswgt\naffinity-nodeport-timeout-xswgt\naffinity-nodeport-timeout-xswgt\naffinity-nodeport-timeout-xswgt\naffinity-nodeport-timeout-xswgt\naffinity-nodeport-timeout-xswgt\naffinity-nodeport-timeout-xswgt\naffinity-nodeport-timeout-xswgt\naffinity-nodeport-timeout-xswgt\naffinity-nodeport-timeout-xswgt\naffinity-nodeport-timeout-xswgt\naffinity-nodeport-timeout-xswgt\naffinity-nodeport-timeout-xswgt\naffinity-nodeport-timeout-xswgt" May 20 23:44:47.270: INFO: Received response from host: May 20 23:44:47.270: INFO: Received response from host: affinity-nodeport-timeout-xswgt May 20 23:44:47.270: INFO: Received response from host: affinity-nodeport-timeout-xswgt May 20 23:44:47.270: INFO: Received response from host: affinity-nodeport-timeout-xswgt May 20 23:44:47.270: INFO: Received response from host: affinity-nodeport-timeout-xswgt May 20 23:44:47.270: INFO: Received response from host: affinity-nodeport-timeout-xswgt May 20 23:44:47.270: INFO: Received response from host: affinity-nodeport-timeout-xswgt May 20 23:44:47.270: INFO: Received response from host: affinity-nodeport-timeout-xswgt May 20 23:44:47.270: INFO: Received response from host: affinity-nodeport-timeout-xswgt May 20 23:44:47.270: INFO: Received response from host: affinity-nodeport-timeout-xswgt May 20 23:44:47.270: INFO: Received response from host: affinity-nodeport-timeout-xswgt May 20 23:44:47.270: INFO: Received response from host: affinity-nodeport-timeout-xswgt May 20 23:44:47.270: INFO: Received response from host: affinity-nodeport-timeout-xswgt May 20 23:44:47.270: INFO: Received response from host: affinity-nodeport-timeout-xswgt May 20 23:44:47.270: INFO: Received response from host: affinity-nodeport-timeout-xswgt May 20 23:44:47.270: INFO: Received response from host: affinity-nodeport-timeout-xswgt May 20 23:44:47.270: INFO: Received response from host: affinity-nodeport-timeout-xswgt May 
20 23:44:47.270: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1434 execpod-affinitylsdzr -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:31334/' May 20 23:44:47.547: INFO: stderr: "I0520 23:44:47.415877 418 log.go:172] (0xc0009931e0) (0xc000b90280) Create stream\nI0520 23:44:47.415931 418 log.go:172] (0xc0009931e0) (0xc000b90280) Stream added, broadcasting: 1\nI0520 23:44:47.419825 418 log.go:172] (0xc0009931e0) Reply frame received for 1\nI0520 23:44:47.419874 418 log.go:172] (0xc0009931e0) (0xc000556280) Create stream\nI0520 23:44:47.419889 418 log.go:172] (0xc0009931e0) (0xc000556280) Stream added, broadcasting: 3\nI0520 23:44:47.420678 418 log.go:172] (0xc0009931e0) Reply frame received for 3\nI0520 23:44:47.420719 418 log.go:172] (0xc0009931e0) (0xc0005445a0) Create stream\nI0520 23:44:47.420738 418 log.go:172] (0xc0009931e0) (0xc0005445a0) Stream added, broadcasting: 5\nI0520 23:44:47.421654 418 log.go:172] (0xc0009931e0) Reply frame received for 5\nI0520 23:44:47.534519 418 log.go:172] (0xc0009931e0) Data frame received for 5\nI0520 23:44:47.534542 418 log.go:172] (0xc0005445a0) (5) Data frame handling\nI0520 23:44:47.534557 418 log.go:172] (0xc0005445a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31334/\nI0520 23:44:47.540101 418 log.go:172] (0xc0009931e0) Data frame received for 3\nI0520 23:44:47.540119 418 log.go:172] (0xc000556280) (3) Data frame handling\nI0520 23:44:47.540136 418 log.go:172] (0xc000556280) (3) Data frame sent\nI0520 23:44:47.540782 418 log.go:172] (0xc0009931e0) Data frame received for 3\nI0520 23:44:47.540844 418 log.go:172] (0xc000556280) (3) Data frame handling\nI0520 23:44:47.541054 418 log.go:172] (0xc0009931e0) Data frame received for 5\nI0520 23:44:47.541070 418 log.go:172] (0xc0005445a0) (5) Data frame handling\nI0520 23:44:47.542990 418 log.go:172] (0xc0009931e0) Data frame received for 
1\nI0520 23:44:47.543028 418 log.go:172] (0xc000b90280) (1) Data frame handling\nI0520 23:44:47.543049 418 log.go:172] (0xc000b90280) (1) Data frame sent\nI0520 23:44:47.543080 418 log.go:172] (0xc0009931e0) (0xc000b90280) Stream removed, broadcasting: 1\nI0520 23:44:47.543115 418 log.go:172] (0xc0009931e0) Go away received\nI0520 23:44:47.543381 418 log.go:172] (0xc0009931e0) (0xc000b90280) Stream removed, broadcasting: 1\nI0520 23:44:47.543397 418 log.go:172] (0xc0009931e0) (0xc000556280) Stream removed, broadcasting: 3\nI0520 23:44:47.543405 418 log.go:172] (0xc0009931e0) (0xc0005445a0) Stream removed, broadcasting: 5\n" May 20 23:44:47.548: INFO: stdout: "affinity-nodeport-timeout-xswgt" May 20 23:45:02.548: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1434 execpod-affinitylsdzr -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:31334/' May 20 23:45:02.776: INFO: stderr: "I0520 23:45:02.684767 440 log.go:172] (0xc000b88790) (0xc0003043c0) Create stream\nI0520 23:45:02.684823 440 log.go:172] (0xc000b88790) (0xc0003043c0) Stream added, broadcasting: 1\nI0520 23:45:02.688145 440 log.go:172] (0xc000b88790) Reply frame received for 1\nI0520 23:45:02.688182 440 log.go:172] (0xc000b88790) (0xc0005223c0) Create stream\nI0520 23:45:02.688196 440 log.go:172] (0xc000b88790) (0xc0005223c0) Stream added, broadcasting: 3\nI0520 23:45:02.689390 440 log.go:172] (0xc000b88790) Reply frame received for 3\nI0520 23:45:02.689450 440 log.go:172] (0xc000b88790) (0xc0007100a0) Create stream\nI0520 23:45:02.689473 440 log.go:172] (0xc000b88790) (0xc0007100a0) Stream added, broadcasting: 5\nI0520 23:45:02.690785 440 log.go:172] (0xc000b88790) Reply frame received for 5\nI0520 23:45:02.765429 440 log.go:172] (0xc000b88790) Data frame received for 5\nI0520 23:45:02.765471 440 log.go:172] (0xc0007100a0) (5) Data frame handling\nI0520 23:45:02.765506 440 log.go:172] (0xc0007100a0) (5) 
Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31334/\nI0520 23:45:02.768329 440 log.go:172] (0xc000b88790) Data frame received for 3\nI0520 23:45:02.768367 440 log.go:172] (0xc0005223c0) (3) Data frame handling\nI0520 23:45:02.768392 440 log.go:172] (0xc0005223c0) (3) Data frame sent\nI0520 23:45:02.768996 440 log.go:172] (0xc000b88790) Data frame received for 5\nI0520 23:45:02.769041 440 log.go:172] (0xc0007100a0) (5) Data frame handling\nI0520 23:45:02.769074 440 log.go:172] (0xc000b88790) Data frame received for 3\nI0520 23:45:02.769104 440 log.go:172] (0xc0005223c0) (3) Data frame handling\nI0520 23:45:02.770822 440 log.go:172] (0xc000b88790) Data frame received for 1\nI0520 23:45:02.770846 440 log.go:172] (0xc0003043c0) (1) Data frame handling\nI0520 23:45:02.770858 440 log.go:172] (0xc0003043c0) (1) Data frame sent\nI0520 23:45:02.770873 440 log.go:172] (0xc000b88790) (0xc0003043c0) Stream removed, broadcasting: 1\nI0520 23:45:02.770894 440 log.go:172] (0xc000b88790) Go away received\nI0520 23:45:02.771330 440 log.go:172] (0xc000b88790) (0xc0003043c0) Stream removed, broadcasting: 1\nI0520 23:45:02.771355 440 log.go:172] (0xc000b88790) (0xc0005223c0) Stream removed, broadcasting: 3\nI0520 23:45:02.771366 440 log.go:172] (0xc000b88790) (0xc0007100a0) Stream removed, broadcasting: 5\n" May 20 23:45:02.776: INFO: stdout: "affinity-nodeport-timeout-brlgr" May 20 23:45:02.776: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-1434, will wait for the garbage collector to delete the pods May 20 23:45:02.974: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 97.565803ms May 20 23:45:03.374: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 400.274945ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:45:15.365: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "services-1434" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:50.930 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":19,"skipped":262,"failed":0} SSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:45:15.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args May 20 23:45:15.483: INFO: Waiting up to 5m0s for pod "var-expansion-84d94260-2ddb-40a7-9add-244d63beb0f1" in namespace "var-expansion-6265" to be "Succeeded or Failed" May 20 23:45:15.495: INFO: Pod "var-expansion-84d94260-2ddb-40a7-9add-244d63beb0f1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.067334ms May 20 23:45:17.498: INFO: Pod "var-expansion-84d94260-2ddb-40a7-9add-244d63beb0f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014776941s May 20 23:45:19.503: INFO: Pod "var-expansion-84d94260-2ddb-40a7-9add-244d63beb0f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019309357s STEP: Saw pod success May 20 23:45:19.503: INFO: Pod "var-expansion-84d94260-2ddb-40a7-9add-244d63beb0f1" satisfied condition "Succeeded or Failed" May 20 23:45:19.506: INFO: Trying to get logs from node latest-worker pod var-expansion-84d94260-2ddb-40a7-9add-244d63beb0f1 container dapi-container: STEP: delete the pod May 20 23:45:19.561: INFO: Waiting for pod var-expansion-84d94260-2ddb-40a7-9add-244d63beb0f1 to disappear May 20 23:45:19.586: INFO: Pod var-expansion-84d94260-2ddb-40a7-9add-244d63beb0f1 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:45:19.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6265" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":288,"completed":20,"skipped":266,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:45:19.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 20 23:45:19.681: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 20 23:45:24.692: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 20 23:45:24.692: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 20 23:45:26.697: INFO: Creating deployment "test-rollover-deployment" May 20 23:45:26.729: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 20 23:45:28.754: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 20 23:45:28.760: INFO: Ensure that both replica sets have 1 created replica May 20 23:45:28.765: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 20 23:45:28.771: INFO: Updating deployment test-rollover-deployment May 20 23:45:28.771: INFO: Wait deployment "test-rollover-deployment" to be observed by the 
deployment controller May 20 23:45:30.796: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 20 23:45:30.801: INFO: Make sure deployment "test-rollover-deployment" is complete May 20 23:45:30.808: INFO: all replica sets need to contain the pod-template-hash label May 20 23:45:30.808: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615126, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615126, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615129, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615126, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 23:45:32.824: INFO: all replica sets need to contain the pod-template-hash label May 20 23:45:32.824: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615126, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615126, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615132, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615126, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 23:45:34.825: INFO: all replica sets need to contain the pod-template-hash label May 20 23:45:34.825: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615126, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615126, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615132, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615126, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 23:45:36.815: INFO: all replica sets need to contain the pod-template-hash label May 20 23:45:36.815: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615126, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615126, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", 
Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615132, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615126, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 23:45:38.856: INFO: all replica sets need to contain the pod-template-hash label May 20 23:45:38.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615126, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615126, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615132, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615126, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 23:45:40.817: INFO: all replica sets need to contain the pod-template-hash label May 20 23:45:40.817: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615126, loc:(*time.Location)(0x7c342a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615126, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615132, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615126, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 20 23:45:42.815: INFO: May 20 23:45:42.815: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 20 23:45:42.821: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-9080 /apis/apps/v1/namespaces/deployment-9080/deployments/test-rollover-deployment a103116a-2f51-492e-b637-cb9cc4445f63 6341707 2 2020-05-20 23:45:26 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-20 23:45:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-20 23:45:42 
+0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00287bf58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-20 23:45:26 +0000 
UTC,LastTransitionTime:2020-05-20 23:45:26 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-7c4fd9c879" has successfully progressed.,LastUpdateTime:2020-05-20 23:45:42 +0000 UTC,LastTransitionTime:2020-05-20 23:45:26 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 20 23:45:42.823: INFO: New ReplicaSet "test-rollover-deployment-7c4fd9c879" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-7c4fd9c879 deployment-9080 /apis/apps/v1/namespaces/deployment-9080/replicasets/test-rollover-deployment-7c4fd9c879 25ac4052-28ce-419d-bb7a-ce5e583f39bf 6341696 2 2020-05-20 23:45:28 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment a103116a-2f51-492e-b637-cb9cc4445f63 0xc0023894f7 0xc0023894f8}] [] [{kube-controller-manager Update apps/v1 2020-05-20 23:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a103116a-2f51-492e-b637-cb9cc4445f63\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 7c4fd9c879,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0023895f8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 20 23:45:42.823: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 20 23:45:42.824: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9080 /apis/apps/v1/namespaces/deployment-9080/replicasets/test-rollover-controller 5d6ca849-cb3d-46c7-8201-28c868b10a1c 6341706 2 2020-05-20 23:45:19 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment a103116a-2f51-492e-b637-cb9cc4445f63 0xc00238904f 0xc002389090}] [] [{e2e.test Update apps/v1 2020-05-20 23:45:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-20 23:45:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a103116a-2f51-492e-b637-cb9cc4445f63\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0023891c8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 20 23:45:42.824: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5 deployment-9080 /apis/apps/v1/namespaces/deployment-9080/replicasets/test-rollover-deployment-5686c4cfd5 58699af8-8dca-49ef-9514-3bea971e1b7e 6341651 2 2020-05-20 23:45:26 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment a103116a-2f51-492e-b637-cb9cc4445f63 0xc0023892c7 0xc0023892c8}] [] [{kube-controller-manager Update apps/v1 2020-05-20 23:45:28 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a103116a-2f51-492e-b637-cb9cc4445f63\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002389478 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] 
map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 20 23:45:42.826: INFO: Pod "test-rollover-deployment-7c4fd9c879-c4q4s" is available: &Pod{ObjectMeta:{test-rollover-deployment-7c4fd9c879-c4q4s test-rollover-deployment-7c4fd9c879- deployment-9080 /api/v1/namespaces/deployment-9080/pods/test-rollover-deployment-7c4fd9c879-c4q4s 5c53282a-fc46-4aab-b792-03f68c91d91e 6341664 0 2020-05-20 23:45:28 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [{apps/v1 ReplicaSet test-rollover-deployment-7c4fd9c879 25ac4052-28ce-419d-bb7a-ce5e583f39bf 0xc002252387 0xc002252388}] [] [{kube-controller-manager Update v1 2020-05-20 23:45:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25ac4052-28ce-419d-bb7a-ce5e583f39bf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 23:45:31 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.55\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nfm9g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nfm9g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nfm9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDe
vices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:45:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:45:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:45:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:45:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.55,StartTime:2020-05-20 
23:45:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-20 23:45:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://dc26302dc80ec3e1fe6164021d309615bc21fe759542b55269eb088a48fc919f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.55,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:45:42.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9080" for this suite. 
• [SLOW TEST:23.225 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":288,"completed":21,"skipped":306,"failed":0} S ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:45:42.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 20 23:45:43.190: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
May 20 23:45:43.197: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:45:43.243: INFO: Number of nodes with available pods: 0 May 20 23:45:43.243: INFO: Node latest-worker is running more than one daemon pod May 20 23:45:44.249: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:45:44.253: INFO: Number of nodes with available pods: 0 May 20 23:45:44.253: INFO: Node latest-worker is running more than one daemon pod May 20 23:45:45.295: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:45:45.318: INFO: Number of nodes with available pods: 0 May 20 23:45:45.318: INFO: Node latest-worker is running more than one daemon pod May 20 23:45:46.249: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:45:46.253: INFO: Number of nodes with available pods: 0 May 20 23:45:46.253: INFO: Node latest-worker is running more than one daemon pod May 20 23:45:47.249: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:45:47.253: INFO: Number of nodes with available pods: 2 May 20 23:45:47.253: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 20 23:45:47.308: INFO: Wrong image for pod: daemon-set-8bmt8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 20 23:45:47.308: INFO: Wrong image for pod: daemon-set-hlnxq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 23:45:47.385: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:45:48.402: INFO: Wrong image for pod: daemon-set-8bmt8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 23:45:48.402: INFO: Wrong image for pod: daemon-set-hlnxq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 23:45:48.407: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:45:49.389: INFO: Wrong image for pod: daemon-set-8bmt8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 23:45:49.389: INFO: Wrong image for pod: daemon-set-hlnxq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 23:45:49.394: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:45:50.390: INFO: Wrong image for pod: daemon-set-8bmt8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 23:45:50.390: INFO: Wrong image for pod: daemon-set-hlnxq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 20 23:45:50.390: INFO: Pod daemon-set-hlnxq is not available May 20 23:45:50.395: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:45:51.389: INFO: Wrong image for pod: daemon-set-8bmt8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 23:45:51.389: INFO: Wrong image for pod: daemon-set-hlnxq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 23:45:51.389: INFO: Pod daemon-set-hlnxq is not available May 20 23:45:51.393: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:45:52.390: INFO: Wrong image for pod: daemon-set-8bmt8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 23:45:52.390: INFO: Wrong image for pod: daemon-set-hlnxq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 23:45:52.390: INFO: Pod daemon-set-hlnxq is not available May 20 23:45:52.395: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:45:53.390: INFO: Wrong image for pod: daemon-set-8bmt8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 23:45:53.390: INFO: Wrong image for pod: daemon-set-hlnxq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 20 23:45:53.390: INFO: Pod daemon-set-hlnxq is not available May 20 23:45:53.394: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:45:54.390: INFO: Wrong image for pod: daemon-set-8bmt8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 23:45:54.390: INFO: Wrong image for pod: daemon-set-hlnxq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 23:45:54.390: INFO: Pod daemon-set-hlnxq is not available May 20 23:45:54.394: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:45:55.388: INFO: Wrong image for pod: daemon-set-8bmt8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 23:45:55.388: INFO: Pod daemon-set-mbhkv is not available May 20 23:45:55.444: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:45:56.389: INFO: Wrong image for pod: daemon-set-8bmt8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 23:45:56.389: INFO: Pod daemon-set-mbhkv is not available May 20 23:45:56.394: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:45:57.389: INFO: Wrong image for pod: daemon-set-8bmt8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 20 23:45:57.389: INFO: Pod daemon-set-mbhkv is not available May 20 23:45:57.393: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:45:58.389: INFO: Wrong image for pod: daemon-set-8bmt8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 23:45:58.394: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:45:59.390: INFO: Wrong image for pod: daemon-set-8bmt8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 23:45:59.390: INFO: Pod daemon-set-8bmt8 is not available May 20 23:45:59.395: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:46:00.389: INFO: Wrong image for pod: daemon-set-8bmt8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 23:46:00.390: INFO: Pod daemon-set-8bmt8 is not available May 20 23:46:00.394: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:46:01.390: INFO: Wrong image for pod: daemon-set-8bmt8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. 
May 20 23:46:01.390: INFO: Pod daemon-set-8bmt8 is not available May 20 23:46:01.394: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:46:02.391: INFO: Wrong image for pod: daemon-set-8bmt8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 23:46:02.391: INFO: Pod daemon-set-8bmt8 is not available May 20 23:46:02.395: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:46:03.389: INFO: Wrong image for pod: daemon-set-8bmt8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 23:46:03.389: INFO: Pod daemon-set-8bmt8 is not available May 20 23:46:03.393: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:46:04.393: INFO: Wrong image for pod: daemon-set-8bmt8. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 20 23:46:04.393: INFO: Pod daemon-set-8bmt8 is not available May 20 23:46:04.397: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:46:05.390: INFO: Pod daemon-set-g7lmj is not available May 20 23:46:05.395: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
May 20 23:46:05.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:46:05.403: INFO: Number of nodes with available pods: 1 May 20 23:46:05.403: INFO: Node latest-worker is running more than one daemon pod May 20 23:46:06.427: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:46:06.431: INFO: Number of nodes with available pods: 1 May 20 23:46:06.431: INFO: Node latest-worker is running more than one daemon pod May 20 23:46:07.408: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:46:07.412: INFO: Number of nodes with available pods: 1 May 20 23:46:07.412: INFO: Node latest-worker is running more than one daemon pod May 20 23:46:08.408: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:46:08.412: INFO: Number of nodes with available pods: 2 May 20 23:46:08.412: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6846, will wait for the garbage collector to delete the pods May 20 23:46:08.487: INFO: Deleting DaemonSet.extensions daemon-set took: 6.591586ms May 20 23:46:08.788: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.226417ms May 20 23:46:15.291: INFO: Number of nodes with available pods: 0 May 20 23:46:15.291: INFO: Number of running nodes: 0, number of 
available pods: 0 May 20 23:46:15.293: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6846/daemonsets","resourceVersion":"6341914"},"items":null} May 20 23:46:15.295: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6846/pods","resourceVersion":"6341914"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:46:15.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6846" for this suite. • [SLOW TEST:32.477 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":288,"completed":22,"skipped":307,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:46:15.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 20 23:46:15.410: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3b90644c-4c76-4e45-ab52-e38effdca235" in namespace "downward-api-6547" to be "Succeeded or Failed" May 20 23:46:15.414: INFO: Pod "downwardapi-volume-3b90644c-4c76-4e45-ab52-e38effdca235": Phase="Pending", Reason="", readiness=false. Elapsed: 4.305034ms May 20 23:46:17.418: INFO: Pod "downwardapi-volume-3b90644c-4c76-4e45-ab52-e38effdca235": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007878807s May 20 23:46:19.421: INFO: Pod "downwardapi-volume-3b90644c-4c76-4e45-ab52-e38effdca235": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011698145s STEP: Saw pod success May 20 23:46:19.421: INFO: Pod "downwardapi-volume-3b90644c-4c76-4e45-ab52-e38effdca235" satisfied condition "Succeeded or Failed" May 20 23:46:19.424: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-3b90644c-4c76-4e45-ab52-e38effdca235 container client-container: STEP: delete the pod May 20 23:46:19.490: INFO: Waiting for pod downwardapi-volume-3b90644c-4c76-4e45-ab52-e38effdca235 to disappear May 20 23:46:19.581: INFO: Pod downwardapi-volume-3b90644c-4c76-4e45-ab52-e38effdca235 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:46:19.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6547" for this suite. 
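The downward API volume test above mounts the pod's own name as a file and checks the container saw it. The plugin's effect can be sketched as projecting selected pod fields into files under the mount path — a simplified model only; the field-path walk and the `("podname", "metadata.name")` item are assumptions based on the common podname case, not the actual volume plugin code:

```python
import os
import tempfile

def project_downward_api(pod, items, mount_dir):
    """Write each requested field (a dotted path into the pod object, e.g.
    'metadata.name') to a file, roughly as the downward API volume does."""
    for filename, field_path in items:
        value = pod
        for part in field_path.split("."):
            value = value[part]  # walk the dotted path into nested dicts
        with open(os.path.join(mount_dir, filename), "w") as f:
            f.write(str(value))

# Hypothetical pod object mirroring the test's metadata.
pod = {"metadata": {"name": "downwardapi-volume-test",
                    "namespace": "downward-api-6547"}}
mount = tempfile.mkdtemp()
project_downward_api(pod, [("podname", "metadata.name")], mount)
```

After this runs, reading `podname` from the mount directory yields the pod's name, which is what the test's client container echoes back in its logs.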
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":288,"completed":23,"skipped":314,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:46:19.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 20 23:46:19.723: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6740' May 20 23:46:20.031: INFO: stderr: "" May 20 23:46:20.031: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 20 23:46:21.035: INFO: Selector matched 1 pods for map[app:agnhost] May 20 23:46:21.036: INFO: Found 0 / 1 May 20 23:46:22.035: INFO: Selector matched 1 pods for map[app:agnhost] May 20 23:46:22.035: INFO: Found 0 / 1 May 20 23:46:23.036: INFO: Selector matched 1 pods for map[app:agnhost] May 20 23:46:23.036: INFO: Found 0 / 1 May 20 23:46:24.036: INFO: Selector matched 1 pods for map[app:agnhost] May 20 23:46:24.036: INFO: Found 1 / 1 May 20 23:46:24.036: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 STEP: patching all pods May 20 23:46:24.039: INFO: Selector matched 1 pods for map[app:agnhost] May 20 23:46:24.039: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 20 23:46:24.039: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config patch pod agnhost-master-st7mq --namespace=kubectl-6740 -p {"metadata":{"annotations":{"x":"y"}}}' May 20 23:46:24.154: INFO: stderr: "" May 20 23:46:24.154: INFO: stdout: "pod/agnhost-master-st7mq patched\n" STEP: checking annotations May 20 23:46:24.178: INFO: Selector matched 1 pods for map[app:agnhost] May 20 23:46:24.178: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:46:24.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6740" for this suite. 
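The patch step above sends the body `{"metadata":{"annotations":{"x":"y"}}}`. For map-valued fields like annotations, its effect can be modeled as a recursive dictionary merge — a sketch of the map-merge case only; kubectl actually applies a strategic merge patch, which has additional rules for lists that this does not cover:

```python
def merge_patch(obj, patch):
    """Recursively merge `patch` into `obj`: nested dicts are merged,
    everything else is overwritten. Existing sibling keys are preserved."""
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(obj.get(key), dict):
            merge_patch(obj[key], value)
        else:
            obj[key] = value
    return obj

# Hypothetical pre-existing annotation "a": "b" to show it survives the patch.
pod = {"metadata": {"name": "agnhost-master-st7mq",
                    "annotations": {"a": "b"}}}
merge_patch(pod, {"metadata": {"annotations": {"x": "y"}}})
```

The "checking annotations" step then only has to verify that the `x: y` pair is present on each matched pod.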
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":288,"completed":24,"skipped":328,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:46:24.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:46:28.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-360" for this suite. 
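The Docker Containers test above leaves `command` and `args` blank, so the container runs with the image's own ENTRYPOINT and CMD. The documented precedence rules can be sketched as follows (the example entrypoint/cmd values are hypothetical):

```python
def effective_invocation(image_entrypoint, image_cmd, command=None, args=None):
    """Model Kubernetes command/args precedence over the image's
    ENTRYPOINT/CMD: pod-level command replaces ENTRYPOINT, pod-level args
    replace CMD, and overriding command alone drops the image CMD."""
    entrypoint = command if command is not None else image_entrypoint
    if command is not None and args is None:
        cmd = []  # a supplied command ignores the image's CMD entirely
    else:
        cmd = args if args is not None else image_cmd
    return (entrypoint or []) + (cmd or [])

# Blank command and args, as in the test: the image defaults win.
invocation = effective_invocation(["httpd-foreground"], [])
```

With both fields blank, `invocation` is just the image's ENTRYPOINT plus CMD, which is exactly what the conformance test asserts.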
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":288,"completed":25,"skipped":379,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:46:28.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs May 20 23:46:28.494: INFO: Waiting up to 5m0s for pod "pod-b896038a-bf62-460b-a7c6-8708aafc3783" in namespace "emptydir-5473" to be "Succeeded or Failed" May 20 23:46:28.522: INFO: Pod "pod-b896038a-bf62-460b-a7c6-8708aafc3783": Phase="Pending", Reason="", readiness=false. Elapsed: 28.15927ms May 20 23:46:30.526: INFO: Pod "pod-b896038a-bf62-460b-a7c6-8708aafc3783": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031919786s May 20 23:46:32.531: INFO: Pod "pod-b896038a-bf62-460b-a7c6-8708aafc3783": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.03698763s STEP: Saw pod success May 20 23:46:32.531: INFO: Pod "pod-b896038a-bf62-460b-a7c6-8708aafc3783" satisfied condition "Succeeded or Failed" May 20 23:46:32.535: INFO: Trying to get logs from node latest-worker pod pod-b896038a-bf62-460b-a7c6-8708aafc3783 container test-container: STEP: delete the pod May 20 23:46:32.575: INFO: Waiting for pod pod-b896038a-bf62-460b-a7c6-8708aafc3783 to disappear May 20 23:46:32.641: INFO: Pod pod-b896038a-bf62-460b-a7c6-8708aafc3783 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:46:32.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5473" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":26,"skipped":408,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:46:32.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 20 23:46:32.727: INFO: >>> 
kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:46:33.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1432" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":288,"completed":27,"skipped":468,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:46:33.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:46:33.943: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "kubelet-test-257" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":288,"completed":28,"skipped":553,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:46:34.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 20 23:46:34.704: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a6bb00cf-e2ee-41ae-86e9-18a57d64ab1d" in namespace "downward-api-992" to be "Succeeded or Failed" May 20 23:46:34.934: INFO: Pod "downwardapi-volume-a6bb00cf-e2ee-41ae-86e9-18a57d64ab1d": Phase="Pending", Reason="", readiness=false. Elapsed: 230.881746ms May 20 23:46:36.939: INFO: Pod "downwardapi-volume-a6bb00cf-e2ee-41ae-86e9-18a57d64ab1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.234939632s May 20 23:46:38.941: INFO: Pod "downwardapi-volume-a6bb00cf-e2ee-41ae-86e9-18a57d64ab1d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.237753455s STEP: Saw pod success May 20 23:46:38.941: INFO: Pod "downwardapi-volume-a6bb00cf-e2ee-41ae-86e9-18a57d64ab1d" satisfied condition "Succeeded or Failed" May 20 23:46:38.944: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-a6bb00cf-e2ee-41ae-86e9-18a57d64ab1d container client-container: STEP: delete the pod May 20 23:46:39.034: INFO: Waiting for pod downwardapi-volume-a6bb00cf-e2ee-41ae-86e9-18a57d64ab1d to disappear May 20 23:46:39.043: INFO: Pod downwardapi-volume-a6bb00cf-e2ee-41ae-86e9-18a57d64ab1d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:46:39.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-992" for this suite. • [SLOW TEST:5.028 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":29,"skipped":577,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:46:39.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 20 23:46:39.136: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1099' May 20 23:46:39.469: INFO: stderr: "" May 20 23:46:39.469: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 20 23:46:39.469: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1099' May 20 23:46:39.595: INFO: stderr: "" May 20 23:46:39.595: INFO: stdout: "update-demo-nautilus-grc2r update-demo-nautilus-kx6w5 " May 20 23:46:39.596: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-grc2r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1099' May 20 23:46:39.702: INFO: stderr: "" May 20 23:46:39.702: INFO: stdout: "" May 20 23:46:39.702: INFO: update-demo-nautilus-grc2r is created but not running May 20 23:46:44.702: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1099' May 20 23:46:44.859: INFO: stderr: "" May 20 23:46:44.859: INFO: stdout: "update-demo-nautilus-grc2r update-demo-nautilus-kx6w5 " May 20 23:46:44.859: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-grc2r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1099' May 20 23:46:44.956: INFO: stderr: "" May 20 23:46:44.956: INFO: stdout: "true" May 20 23:46:44.956: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-grc2r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1099' May 20 23:46:45.050: INFO: stderr: "" May 20 23:46:45.050: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 20 23:46:45.050: INFO: validating pod update-demo-nautilus-grc2r May 20 23:46:45.065: INFO: got data: { "image": "nautilus.jpg" } May 20 23:46:45.065: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 20 23:46:45.065: INFO: update-demo-nautilus-grc2r is verified up and running May 20 23:46:45.065: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kx6w5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1099' May 20 23:46:45.177: INFO: stderr: "" May 20 23:46:45.177: INFO: stdout: "true" May 20 23:46:45.177: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kx6w5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1099' May 20 23:46:45.287: INFO: stderr: "" May 20 23:46:45.287: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 20 23:46:45.287: INFO: validating pod update-demo-nautilus-kx6w5 May 20 23:46:45.305: INFO: got data: { "image": "nautilus.jpg" } May 20 23:46:45.305: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 20 23:46:45.305: INFO: update-demo-nautilus-kx6w5 is verified up and running STEP: scaling down the replication controller May 20 23:46:45.308: INFO: scanned /root for discovery docs: May 20 23:46:45.308: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1099' May 20 23:46:46.439: INFO: stderr: "" May 20 23:46:46.439: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 20 23:46:46.439: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1099' May 20 23:46:46.547: INFO: stderr: "" May 20 23:46:46.547: INFO: stdout: "update-demo-nautilus-grc2r update-demo-nautilus-kx6w5 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 20 23:46:51.548: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1099' May 20 23:46:51.661: INFO: stderr: "" May 20 23:46:51.661: INFO: stdout: "update-demo-nautilus-grc2r update-demo-nautilus-kx6w5 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 20 23:46:56.661: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1099' May 20 23:46:56.761: INFO: stderr: "" May 20 23:46:56.761: INFO: stdout: "update-demo-nautilus-kx6w5 " May 20 23:46:56.761: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kx6w5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1099' May 20 23:46:56.861: INFO: stderr: "" May 20 23:46:56.861: INFO: stdout: "true" May 20 23:46:56.861: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kx6w5 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1099' May 20 23:46:56.959: INFO: stderr: "" May 20 23:46:56.959: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 20 23:46:56.959: INFO: validating pod update-demo-nautilus-kx6w5 May 20 23:46:56.962: INFO: got data: { "image": "nautilus.jpg" } May 20 23:46:56.963: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 20 23:46:56.963: INFO: update-demo-nautilus-kx6w5 is verified up and running STEP: scaling up the replication controller May 20 23:46:56.965: INFO: scanned /root for discovery docs: May 20 23:46:56.965: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1099' May 20 23:46:58.160: INFO: stderr: "" May 20 23:46:58.160: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 20 23:46:58.161: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1099' May 20 23:46:58.277: INFO: stderr: "" May 20 23:46:58.277: INFO: stdout: "update-demo-nautilus-9v76m update-demo-nautilus-kx6w5 " May 20 23:46:58.277: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9v76m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1099' May 20 23:46:58.391: INFO: stderr: "" May 20 23:46:58.391: INFO: stdout: "" May 20 23:46:58.391: INFO: update-demo-nautilus-9v76m is created but not running May 20 23:47:03.391: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1099' May 20 23:47:03.504: INFO: stderr: "" May 20 23:47:03.504: INFO: stdout: "update-demo-nautilus-9v76m update-demo-nautilus-kx6w5 " May 20 23:47:03.504: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9v76m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1099' May 20 23:47:03.621: INFO: stderr: "" May 20 23:47:03.621: INFO: stdout: "true" May 20 23:47:03.621: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9v76m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1099' May 20 23:47:03.725: INFO: stderr: "" May 20 23:47:03.725: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 20 23:47:03.725: INFO: validating pod update-demo-nautilus-9v76m May 20 23:47:03.728: INFO: got data: { "image": "nautilus.jpg" } May 20 23:47:03.728: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 20 23:47:03.728: INFO: update-demo-nautilus-9v76m is verified up and running May 20 23:47:03.729: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kx6w5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1099' May 20 23:47:03.832: INFO: stderr: "" May 20 23:47:03.832: INFO: stdout: "true" May 20 23:47:03.832: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kx6w5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1099' May 20 23:47:03.936: INFO: stderr: "" May 20 23:47:03.936: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 20 23:47:03.936: INFO: validating pod update-demo-nautilus-kx6w5 May 20 23:47:03.940: INFO: got data: { "image": "nautilus.jpg" } May 20 23:47:03.940: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 20 23:47:03.940: INFO: update-demo-nautilus-kx6w5 is verified up and running STEP: using delete to clean up resources May 20 23:47:03.940: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1099' May 20 23:47:04.042: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 20 23:47:04.042: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 20 23:47:04.042: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1099' May 20 23:47:04.132: INFO: stderr: "No resources found in kubectl-1099 namespace.\n" May 20 23:47:04.132: INFO: stdout: "" May 20 23:47:04.132: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1099 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 20 23:47:04.244: INFO: stderr: "" May 20 23:47:04.244: INFO: stdout: "update-demo-nautilus-9v76m\nupdate-demo-nautilus-kx6w5\n" May 20 23:47:04.744: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1099' May 20 23:47:04.858: INFO: stderr: "No resources found in kubectl-1099 namespace.\n" May 20 23:47:04.858: INFO: stdout: "" May 20 23:47:04.858: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1099 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 20 23:47:04.965: INFO: stderr: "" May 20 23:47:04.965: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:47:04.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1099" for this suite. 
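The scale steps above repeat a `kubectl get pods` listing until the observed pod count matches the requested replicas, logging `Replicas for name=update-demo: expected=1 actual=2` on each miss. That wait loop can be sketched as a simple poller — `list_pods` here is a hypothetical stand-in for the kubectl call, and the timeout/interval defaults are illustrative, not the e2e framework's values:

```python
import time

def wait_for_replicas(list_pods, expected, timeout=300.0, interval=5.0):
    """Poll list_pods() until it returns exactly `expected` pod names,
    or raise TimeoutError once the deadline passes."""
    deadline = time.monotonic() + timeout
    while True:
        pods = list_pods()
        if len(pods) == expected:
            return pods
        if time.monotonic() >= deadline:
            raise TimeoutError(f"expected={expected} actual={len(pods)}")
        time.sleep(interval)

# A fake lister that reports two pods twice before the scale-down lands,
# mirroring the two "expected=1 actual=2" retries in the log.
states = iter([["grc2r", "kx6w5"], ["grc2r", "kx6w5"], ["kx6w5"]])
remaining = wait_for_replicas(lambda: next(states), expected=1, interval=0.0)
```

The scale-up phase is the same loop with `expected=2`, followed by the per-pod image and data validation seen in the log.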
• [SLOW TEST:25.922 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":288,"completed":30,"skipped":580,"failed":0} [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:47:04.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1393 STEP: creating an pod May 20 23:47:05.025: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 --namespace=kubectl-3294 -- logs-generator --log-lines-total 100 --run-duration 20s' May 20 23:47:05.246: INFO: stderr: "" May 20 23:47:05.246: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: 
Waiting for log generator to start. May 20 23:47:05.246: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 20 23:47:05.246: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-3294" to be "running and ready, or succeeded" May 20 23:47:05.302: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 56.481403ms May 20 23:47:07.457: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210554179s May 20 23:47:09.461: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.215064968s May 20 23:47:09.461: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 20 23:47:09.461: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for a matching strings May 20 23:47:09.461: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3294' May 20 23:47:09.568: INFO: stderr: "" May 20 23:47:09.568: INFO: stdout: "I0520 23:47:08.195375 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/bhc 463\nI0520 23:47:08.395514 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/7lv7 357\nI0520 23:47:08.595547 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/r4j 244\nI0520 23:47:08.795586 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/5lh5 283\nI0520 23:47:08.995524 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/lj8f 438\nI0520 23:47:09.195557 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/7rx 289\nI0520 23:47:09.395544 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/b9xq 275\n" STEP: limiting log lines May 20 23:47:09.568: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator 
--namespace=kubectl-3294 --tail=1' May 20 23:47:09.670: INFO: stderr: "" May 20 23:47:09.670: INFO: stdout: "I0520 23:47:09.595546 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/xzd 369\n" May 20 23:47:09.670: INFO: got output "I0520 23:47:09.595546 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/xzd 369\n" STEP: limiting log bytes May 20 23:47:09.671: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3294 --limit-bytes=1' May 20 23:47:09.775: INFO: stderr: "" May 20 23:47:09.775: INFO: stdout: "I" May 20 23:47:09.775: INFO: got output "I" STEP: exposing timestamps May 20 23:47:09.775: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3294 --tail=1 --timestamps' May 20 23:47:09.905: INFO: stderr: "" May 20 23:47:09.905: INFO: stdout: "2020-05-20T23:47:09.795697812Z I0520 23:47:09.795544 1 logs_generator.go:76] 8 GET /api/v1/namespaces/ns/pods/qcw 492\n" May 20 23:47:09.905: INFO: got output "2020-05-20T23:47:09.795697812Z I0520 23:47:09.795544 1 logs_generator.go:76] 8 GET /api/v1/namespaces/ns/pods/qcw 492\n" STEP: restricting to a time range May 20 23:47:12.406: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3294 --since=1s' May 20 23:47:12.523: INFO: stderr: "" May 20 23:47:12.523: INFO: stdout: "I0520 23:47:11.595568 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/mjfc 469\nI0520 23:47:11.795560 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/f6d 323\nI0520 23:47:11.995546 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/4h8j 500\nI0520 23:47:12.195591 1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/dr26 357\nI0520 
23:47:12.395568 1 logs_generator.go:76] 21 GET /api/v1/namespaces/kube-system/pods/9xrd 272\n" May 20 23:47:12.523: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3294 --since=24h' May 20 23:47:12.644: INFO: stderr: "" May 20 23:47:12.644: INFO: stdout: "I0520 23:47:08.195375 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/bhc 463\nI0520 23:47:08.395514 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/7lv7 357\nI0520 23:47:08.595547 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/r4j 244\nI0520 23:47:08.795586 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/5lh5 283\nI0520 23:47:08.995524 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/lj8f 438\nI0520 23:47:09.195557 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/7rx 289\nI0520 23:47:09.395544 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/b9xq 275\nI0520 23:47:09.595546 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/xzd 369\nI0520 23:47:09.795544 1 logs_generator.go:76] 8 GET /api/v1/namespaces/ns/pods/qcw 492\nI0520 23:47:09.995568 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/vkw2 573\nI0520 23:47:10.195597 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/47wh 205\nI0520 23:47:10.395537 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/fs6z 263\nI0520 23:47:10.595573 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/kbn 291\nI0520 23:47:10.795594 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/6pq5 561\nI0520 23:47:10.995600 1 logs_generator.go:76] 14 GET /api/v1/namespaces/default/pods/cjhc 460\nI0520 23:47:11.195508 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/7gx9 591\nI0520 23:47:11.395572 1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/rdc 446\nI0520 23:47:11.595568 1 logs_generator.go:76] 
17 PUT /api/v1/namespaces/default/pods/mjfc 469\nI0520 23:47:11.795560 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/f6d 323\nI0520 23:47:11.995546 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/4h8j 500\nI0520 23:47:12.195591 1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/dr26 357\nI0520 23:47:12.395568 1 logs_generator.go:76] 21 GET /api/v1/namespaces/kube-system/pods/9xrd 272\nI0520 23:47:12.595596 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/dxjf 384\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 May 20 23:47:12.644: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-3294' May 20 23:47:15.281: INFO: stderr: "" May 20 23:47:15.281: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:47:15.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3294" for this suite. 
• [SLOW TEST:10.323 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":288,"completed":31,"skipped":580,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:47:15.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-416c038c-31da-4242-8fdc-94c46d26e085 STEP: Creating a pod to test consume secrets May 20 23:47:15.382: INFO: Waiting up to 5m0s for pod "pod-secrets-282aed5e-4f1f-45a8-bb69-2f4aafeae9bd" in namespace "secrets-230" to be "Succeeded or Failed" May 20 23:47:15.394: INFO: Pod "pod-secrets-282aed5e-4f1f-45a8-bb69-2f4aafeae9bd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.177545ms May 20 23:47:17.398: INFO: Pod "pod-secrets-282aed5e-4f1f-45a8-bb69-2f4aafeae9bd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.016222272s May 20 23:47:19.403: INFO: Pod "pod-secrets-282aed5e-4f1f-45a8-bb69-2f4aafeae9bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020765494s STEP: Saw pod success May 20 23:47:19.403: INFO: Pod "pod-secrets-282aed5e-4f1f-45a8-bb69-2f4aafeae9bd" satisfied condition "Succeeded or Failed" May 20 23:47:19.406: INFO: Trying to get logs from node latest-worker pod pod-secrets-282aed5e-4f1f-45a8-bb69-2f4aafeae9bd container secret-env-test: STEP: delete the pod May 20 23:47:19.439: INFO: Waiting for pod pod-secrets-282aed5e-4f1f-45a8-bb69-2f4aafeae9bd to disappear May 20 23:47:19.445: INFO: Pod pod-secrets-282aed5e-4f1f-45a8-bb69-2f4aafeae9bd no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:47:19.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-230" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":288,"completed":32,"skipped":596,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:47:19.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] 
PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:47:19.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-5928" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":288,"completed":33,"skipped":693,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:47:19.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 20 23:47:19.639: INFO: Waiting up to 5m0s for pod "pod-91051757-f8c5-43a4-a203-f9180b0516c0" in namespace "emptydir-7507" to be "Succeeded or Failed" May 20 23:47:19.643: INFO: Pod "pod-91051757-f8c5-43a4-a203-f9180b0516c0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.899478ms May 20 23:47:21.648: INFO: Pod "pod-91051757-f8c5-43a4-a203-f9180b0516c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008893687s May 20 23:47:23.653: INFO: Pod "pod-91051757-f8c5-43a4-a203-f9180b0516c0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013565604s STEP: Saw pod success May 20 23:47:23.653: INFO: Pod "pod-91051757-f8c5-43a4-a203-f9180b0516c0" satisfied condition "Succeeded or Failed" May 20 23:47:23.656: INFO: Trying to get logs from node latest-worker pod pod-91051757-f8c5-43a4-a203-f9180b0516c0 container test-container: STEP: delete the pod May 20 23:47:23.698: INFO: Waiting for pod pod-91051757-f8c5-43a4-a203-f9180b0516c0 to disappear May 20 23:47:23.721: INFO: Pod pod-91051757-f8c5-43a4-a203-f9180b0516c0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:47:23.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7507" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":34,"skipped":699,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:47:23.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-6504 STEP: creating a selector STEP: Creating the service pods in kubernetes May 20 23:47:23.795: INFO: 
Waiting up to 10m0s for all (but 0) nodes to be schedulable May 20 23:47:23.889: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 23:47:25.893: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 20 23:47:27.892: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 23:47:29.893: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 23:47:31.894: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 23:47:33.894: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 23:47:35.894: INFO: The status of Pod netserver-0 is Running (Ready = false) May 20 23:47:37.894: INFO: The status of Pod netserver-0 is Running (Ready = true) May 20 23:47:37.900: INFO: The status of Pod netserver-1 is Running (Ready = false) May 20 23:47:39.904: INFO: The status of Pod netserver-1 is Running (Ready = false) May 20 23:47:41.904: INFO: The status of Pod netserver-1 is Running (Ready = false) May 20 23:47:43.904: INFO: The status of Pod netserver-1 is Running (Ready = false) May 20 23:47:45.930: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 20 23:47:49.958: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.79:8080/dial?request=hostname&protocol=http&host=10.244.1.64&port=8080&tries=1'] Namespace:pod-network-test-6504 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 20 23:47:49.958: INFO: >>> kubeConfig: /root/.kube/config I0520 23:47:49.990887 8 log.go:172] (0xc002371340) (0xc0018c4780) Create stream I0520 23:47:49.990939 8 log.go:172] (0xc002371340) (0xc0018c4780) Stream added, broadcasting: 1 I0520 23:47:49.993319 8 log.go:172] (0xc002371340) Reply frame received for 1 I0520 23:47:49.993372 8 log.go:172] (0xc002371340) (0xc0018c4820) Create stream I0520 23:47:49.993387 8 
log.go:172] (0xc002371340) (0xc0018c4820) Stream added, broadcasting: 3 I0520 23:47:49.994380 8 log.go:172] (0xc002371340) Reply frame received for 3 I0520 23:47:49.994426 8 log.go:172] (0xc002371340) (0xc0025bde00) Create stream I0520 23:47:49.994441 8 log.go:172] (0xc002371340) (0xc0025bde00) Stream added, broadcasting: 5 I0520 23:47:49.995436 8 log.go:172] (0xc002371340) Reply frame received for 5 I0520 23:47:50.085344 8 log.go:172] (0xc002371340) Data frame received for 3 I0520 23:47:50.085372 8 log.go:172] (0xc0018c4820) (3) Data frame handling I0520 23:47:50.085380 8 log.go:172] (0xc0018c4820) (3) Data frame sent I0520 23:47:50.086260 8 log.go:172] (0xc002371340) Data frame received for 3 I0520 23:47:50.086310 8 log.go:172] (0xc0018c4820) (3) Data frame handling I0520 23:47:50.086340 8 log.go:172] (0xc002371340) Data frame received for 5 I0520 23:47:50.086356 8 log.go:172] (0xc0025bde00) (5) Data frame handling I0520 23:47:50.103305 8 log.go:172] (0xc002371340) Data frame received for 1 I0520 23:47:50.103417 8 log.go:172] (0xc0018c4780) (1) Data frame handling I0520 23:47:50.103476 8 log.go:172] (0xc0018c4780) (1) Data frame sent I0520 23:47:50.103832 8 log.go:172] (0xc002371340) (0xc0018c4780) Stream removed, broadcasting: 1 I0520 23:47:50.104215 8 log.go:172] (0xc002371340) (0xc0018c4780) Stream removed, broadcasting: 1 I0520 23:47:50.104275 8 log.go:172] (0xc002371340) (0xc0018c4820) Stream removed, broadcasting: 3 I0520 23:47:50.104313 8 log.go:172] (0xc002371340) (0xc0025bde00) Stream removed, broadcasting: 5 I0520 23:47:50.104379 8 log.go:172] (0xc002371340) Go away received May 20 23:47:50.104: INFO: Waiting for responses: map[] May 20 23:47:50.108: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.79:8080/dial?request=hostname&protocol=http&host=10.244.2.78&port=8080&tries=1'] Namespace:pod-network-test-6504 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} 
May 20 23:47:50.108: INFO: >>> kubeConfig: /root/.kube/config I0520 23:47:50.129524 8 log.go:172] (0xc002371a20) (0xc0018c4dc0) Create stream I0520 23:47:50.129551 8 log.go:172] (0xc002371a20) (0xc0018c4dc0) Stream added, broadcasting: 1 I0520 23:47:50.130980 8 log.go:172] (0xc002371a20) Reply frame received for 1 I0520 23:47:50.131018 8 log.go:172] (0xc002371a20) (0xc0018c4e60) Create stream I0520 23:47:50.131027 8 log.go:172] (0xc002371a20) (0xc0018c4e60) Stream added, broadcasting: 3 I0520 23:47:50.131741 8 log.go:172] (0xc002371a20) Reply frame received for 3 I0520 23:47:50.131770 8 log.go:172] (0xc002371a20) (0xc002b292c0) Create stream I0520 23:47:50.131778 8 log.go:172] (0xc002371a20) (0xc002b292c0) Stream added, broadcasting: 5 I0520 23:47:50.132489 8 log.go:172] (0xc002371a20) Reply frame received for 5 I0520 23:47:50.203947 8 log.go:172] (0xc002371a20) Data frame received for 3 I0520 23:47:50.204029 8 log.go:172] (0xc0018c4e60) (3) Data frame handling I0520 23:47:50.204063 8 log.go:172] (0xc0018c4e60) (3) Data frame sent I0520 23:47:50.204459 8 log.go:172] (0xc002371a20) Data frame received for 5 I0520 23:47:50.204505 8 log.go:172] (0xc002b292c0) (5) Data frame handling I0520 23:47:50.204586 8 log.go:172] (0xc002371a20) Data frame received for 3 I0520 23:47:50.204612 8 log.go:172] (0xc0018c4e60) (3) Data frame handling I0520 23:47:50.206429 8 log.go:172] (0xc002371a20) Data frame received for 1 I0520 23:47:50.206455 8 log.go:172] (0xc0018c4dc0) (1) Data frame handling I0520 23:47:50.206480 8 log.go:172] (0xc0018c4dc0) (1) Data frame sent I0520 23:47:50.206502 8 log.go:172] (0xc002371a20) (0xc0018c4dc0) Stream removed, broadcasting: 1 I0520 23:47:50.206530 8 log.go:172] (0xc002371a20) Go away received I0520 23:47:50.206618 8 log.go:172] (0xc002371a20) (0xc0018c4dc0) Stream removed, broadcasting: 1 I0520 23:47:50.206641 8 log.go:172] (0xc002371a20) (0xc0018c4e60) Stream removed, broadcasting: 3 I0520 23:47:50.206677 8 log.go:172] (0xc002371a20) 
(0xc002b292c0) Stream removed, broadcasting: 5 May 20 23:47:50.206: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:47:50.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6504" for this suite. • [SLOW TEST:26.488 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":288,"completed":35,"skipped":735,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:47:50.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 20 23:47:50.349: INFO: Waiting up to 5m0s for pod 
"pod-b4078144-1220-4589-9070-13df3f98309a" in namespace "emptydir-8131" to be "Succeeded or Failed" May 20 23:47:50.352: INFO: Pod "pod-b4078144-1220-4589-9070-13df3f98309a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.625777ms May 20 23:47:52.356: INFO: Pod "pod-b4078144-1220-4589-9070-13df3f98309a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007230333s May 20 23:47:54.360: INFO: Pod "pod-b4078144-1220-4589-9070-13df3f98309a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0113116s STEP: Saw pod success May 20 23:47:54.360: INFO: Pod "pod-b4078144-1220-4589-9070-13df3f98309a" satisfied condition "Succeeded or Failed" May 20 23:47:54.363: INFO: Trying to get logs from node latest-worker2 pod pod-b4078144-1220-4589-9070-13df3f98309a container test-container: STEP: delete the pod May 20 23:47:54.435: INFO: Waiting for pod pod-b4078144-1220-4589-9070-13df3f98309a to disappear May 20 23:47:54.459: INFO: Pod pod-b4078144-1220-4589-9070-13df3f98309a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:47:54.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8131" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":36,"skipped":754,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:47:54.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 20 23:47:54.585: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e0438f02-3ce2-483a-a5f1-e15126e92a13" in namespace "projected-3428" to be "Succeeded or Failed" May 20 23:47:54.651: INFO: Pod "downwardapi-volume-e0438f02-3ce2-483a-a5f1-e15126e92a13": Phase="Pending", Reason="", readiness=false. Elapsed: 66.258204ms May 20 23:47:56.852: INFO: Pod "downwardapi-volume-e0438f02-3ce2-483a-a5f1-e15126e92a13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.267170681s May 20 23:47:58.856: INFO: Pod "downwardapi-volume-e0438f02-3ce2-483a-a5f1-e15126e92a13": Phase="Running", Reason="", readiness=true. Elapsed: 4.271391474s May 20 23:48:00.860: INFO: Pod "downwardapi-volume-e0438f02-3ce2-483a-a5f1-e15126e92a13": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.275677478s STEP: Saw pod success May 20 23:48:00.860: INFO: Pod "downwardapi-volume-e0438f02-3ce2-483a-a5f1-e15126e92a13" satisfied condition "Succeeded or Failed" May 20 23:48:00.863: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-e0438f02-3ce2-483a-a5f1-e15126e92a13 container client-container: STEP: delete the pod May 20 23:48:00.893: INFO: Waiting for pod downwardapi-volume-e0438f02-3ce2-483a-a5f1-e15126e92a13 to disappear May 20 23:48:00.907: INFO: Pod downwardapi-volume-e0438f02-3ce2-483a-a5f1-e15126e92a13 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:48:00.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3428" for this suite. • [SLOW TEST:6.451 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":37,"skipped":762,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:48:00.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a 
default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 20 23:48:01.009: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-432 /api/v1/namespaces/watch-432/configmaps/e2e-watch-test-watch-closed ad134eea-a253-480f-9b22-8a4b8c613cec 6342688 0 2020-05-20 23:48:00 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-20 23:48:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 20 23:48:01.009: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-432 /api/v1/namespaces/watch-432/configmaps/e2e-watch-test-watch-closed ad134eea-a253-480f-9b22-8a4b8c613cec 6342689 0 2020-05-20 23:48:00 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-20 23:48:01 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 20 23:48:01.023: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-432 /api/v1/namespaces/watch-432/configmaps/e2e-watch-test-watch-closed 
ad134eea-a253-480f-9b22-8a4b8c613cec 6342690 0 2020-05-20 23:48:00 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-20 23:48:01 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 20 23:48:01.023: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-432 /api/v1/namespaces/watch-432/configmaps/e2e-watch-test-watch-closed ad134eea-a253-480f-9b22-8a4b8c613cec 6342691 0 2020-05-20 23:48:00 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-20 23:48:01 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:48:01.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-432" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":288,"completed":38,"skipped":774,"failed":0} SS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:48:01.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 20 23:48:01.119: INFO: Creating deployment "webserver-deployment" May 20 23:48:01.124: INFO: Waiting for observed generation 1 May 20 23:48:03.147: INFO: Waiting for all required pods to come up May 20 23:48:03.152: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running May 20 23:48:13.162: INFO: Waiting for deployment "webserver-deployment" to complete May 20 23:48:13.168: INFO: Updating deployment "webserver-deployment" with a non-existent image May 20 23:48:13.174: INFO: Updating deployment webserver-deployment May 20 23:48:13.174: INFO: Waiting for observed generation 2 May 20 23:48:15.210: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 20 23:48:15.214: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 20 23:48:15.218: INFO: Waiting for the first rollout's replicaset of deployment 
"webserver-deployment" to have desired number of replicas May 20 23:48:15.226: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 20 23:48:15.226: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 20 23:48:15.228: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 20 23:48:15.231: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 20 23:48:15.231: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 20 23:48:15.237: INFO: Updating deployment webserver-deployment May 20 23:48:15.237: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 20 23:48:15.383: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 20 23:48:15.426: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 20 23:48:16.196: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-1380 /apis/apps/v1/namespaces/deployment-1380/deployments/webserver-deployment 3a9bc5a2-cb40-4f8a-a5a1-9f6a2a721b4f 6342931 3 2020-05-20 23:48:01 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-20 23:48:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-20 23:48:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00080f8c8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-05-20 23:48:13 +0000 UTC,LastTransitionTime:2020-05-20 23:48:01 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-20 23:48:15 +0000 UTC,LastTransitionTime:2020-05-20 23:48:15 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 20 23:48:16.806: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4 deployment-1380 /apis/apps/v1/namespaces/deployment-1380/replicasets/webserver-deployment-6676bcd6d4 b9257fde-cdb2-4f78-bd70-aac0873ad552 6342990 3 2020-05-20 23:48:13 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 3a9bc5a2-cb40-4f8a-a5a1-9f6a2a721b4f 0xc00080fd67 0xc00080fd68}] [] [{kube-controller-manager Update apps/v1 2020-05-20 23:48:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3a9bc5a2-cb40-4f8a-a5a1-9f6a2a721b4f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00080fde8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 20 23:48:16.806: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 20 23:48:16.806: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797 deployment-1380 /apis/apps/v1/namespaces/deployment-1380/replicasets/webserver-deployment-84855cf797 7d8b599c-cdbe-404c-9e24-42cc8e237311 6342961 3 2020-05-20 23:48:01 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 3a9bc5a2-cb40-4f8a-a5a1-9f6a2a721b4f 0xc00080fe67 0xc00080fe68}] [] [{kube-controller-manager Update apps/v1 2020-05-20 23:48:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3a9bc5a2-cb40-4f8a-a5a1-9f6a2a721b4f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,
Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00080fed8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 20 23:48:16.987: INFO: Pod "webserver-deployment-6676bcd6d4-2fsdp" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-2fsdp webserver-deployment-6676bcd6d4- deployment-1380 /api/v1/namespaces/deployment-1380/pods/webserver-deployment-6676bcd6d4-2fsdp 7b1f01fb-abff-434a-a598-107ec8bc71a0 6342963 0 2020-05-20 23:48:15 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b9257fde-cdb2-4f78-bd70-aac0873ad552 0xc002b26407 0xc002b26408}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9257fde-cdb2-4f78-bd70-aac0873ad552\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 23:48:16.987: INFO: Pod "webserver-deployment-6676bcd6d4-2lnzt" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-2lnzt webserver-deployment-6676bcd6d4- deployment-1380 
/api/v1/namespaces/deployment-1380/pods/webserver-deployment-6676bcd6d4-2lnzt 44f644ee-c546-4e0a-99cf-f694549d4466 6342987 0 2020-05-20 23:48:15 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b9257fde-cdb2-4f78-bd70-aac0873ad552 0xc002b26547 0xc002b26548}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9257fde-cdb2-4f78-bd70-aac0873ad552\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPr
opagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:16 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 23:48:16.987: INFO: Pod "webserver-deployment-6676bcd6d4-5cx8c" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-5cx8c webserver-deployment-6676bcd6d4- deployment-1380 /api/v1/namespaces/deployment-1380/pods/webserver-deployment-6676bcd6d4-5cx8c c79df46f-acee-41f7-86ea-f31e5a19737b 6342991 0 2020-05-20 23:48:15 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b9257fde-cdb2-4f78-bd70-aac0873ad552 0xc002b26687 0xc002b26688}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9257fde-cdb2-4f78-bd70-aac0873ad552\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 23:48:16 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:15 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-20 23:48:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 23:48:16.987: INFO: Pod "webserver-deployment-6676bcd6d4-6cb8t" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-6cb8t webserver-deployment-6676bcd6d4- deployment-1380 /api/v1/namespaces/deployment-1380/pods/webserver-deployment-6676bcd6d4-6cb8t 5e74b925-fdad-4892-8b98-fe1e17220c9a 6342970 0 2020-05-20 23:48:15 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b9257fde-cdb2-4f78-bd70-aac0873ad552 0xc002b26837 0xc002b26838}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9257fde-cdb2-4f78-bd70-aac0873ad552\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 23:48:16.987: INFO: Pod "webserver-deployment-6676bcd6d4-dpd6r" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-dpd6r webserver-deployment-6676bcd6d4- deployment-1380 
/api/v1/namespaces/deployment-1380/pods/webserver-deployment-6676bcd6d4-dpd6r 878a34d8-0b0e-4fa4-a923-48168b14a1aa 6342941 0 2020-05-20 23:48:15 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b9257fde-cdb2-4f78-bd70-aac0873ad552 0xc002b26977 0xc002b26978}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9257fde-cdb2-4f78-bd70-aac0873ad552\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPr
opagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:15 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 23:48:16.988: INFO: Pod "webserver-deployment-6676bcd6d4-h594v" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-h594v webserver-deployment-6676bcd6d4- deployment-1380 /api/v1/namespaces/deployment-1380/pods/webserver-deployment-6676bcd6d4-h594v eaba057c-3296-4452-aebe-19c9a5f283a3 6342957 0 2020-05-20 23:48:15 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b9257fde-cdb2-4f78-bd70-aac0873ad552 0xc002b26ab7 0xc002b26ab8}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9257fde-cdb2-4f78-bd70-aac0873ad552\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersist
entDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassNam
e:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 23:48:16.988: INFO: Pod "webserver-deployment-6676bcd6d4-hjjhc" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-hjjhc webserver-deployment-6676bcd6d4- deployment-1380 /api/v1/namespaces/deployment-1380/pods/webserver-deployment-6676bcd6d4-hjjhc e2015d2b-e02a-46d0-9db0-f221fa916a15 6342964 0 2020-05-20 23:48:15 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b9257fde-cdb2-4f78-bd70-aac0873ad552 0xc002b26bf7 0xc002b26bf8}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9257fde-cdb2-4f78-bd70-aac0873ad552\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 23:48:16.988: INFO: Pod "webserver-deployment-6676bcd6d4-kgxp4" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-kgxp4 webserver-deployment-6676bcd6d4- deployment-1380 
/api/v1/namespaces/deployment-1380/pods/webserver-deployment-6676bcd6d4-kgxp4 9954d977-c11d-40c1-b755-6b0681943498 6342968 0 2020-05-20 23:48:15 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b9257fde-cdb2-4f78-bd70-aac0873ad552 0xc002b26d37 0xc002b26d38}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9257fde-cdb2-4f78-bd70-aac0873ad552\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPr
opagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:15 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 23:48:16.988: INFO: Pod "webserver-deployment-6676bcd6d4-mf864" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-mf864 webserver-deployment-6676bcd6d4- deployment-1380 /api/v1/namespaces/deployment-1380/pods/webserver-deployment-6676bcd6d4-mf864 b3b39375-9392-49dc-878c-d65b9e789b8f 6342910 0 2020-05-20 23:48:13 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b9257fde-cdb2-4f78-bd70-aac0873ad552 0xc002b26e77 0xc002b26e78}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9257fde-cdb2-4f78-bd70-aac0873ad552\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 23:48:13 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:13 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-20 23:48:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 23:48:16.988: INFO: Pod "webserver-deployment-6676bcd6d4-sfvwd" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-sfvwd webserver-deployment-6676bcd6d4- deployment-1380 /api/v1/namespaces/deployment-1380/pods/webserver-deployment-6676bcd6d4-sfvwd bf77a7c2-3f86-4273-8c1e-b39311a9882c 6342888 0 2020-05-20 23:48:13 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b9257fde-cdb2-4f78-bd70-aac0873ad552 0xc002b27027 0xc002b27028}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9257fde-cdb2-4f78-bd70-aac0873ad552\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 23:48:13 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:13 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-20 23:48:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 23:48:16.988: INFO: Pod "webserver-deployment-6676bcd6d4-v22b6" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-v22b6 webserver-deployment-6676bcd6d4- deployment-1380 /api/v1/namespaces/deployment-1380/pods/webserver-deployment-6676bcd6d4-v22b6 8655e70d-7dd2-4032-8e1f-2f20864dcd1d 6342902 0 2020-05-20 23:48:13 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b9257fde-cdb2-4f78-bd70-aac0873ad552 0xc002b271d7 0xc002b271d8}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9257fde-cdb2-4f78-bd70-aac0873ad552\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 23:48:13 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:13 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-20 23:48:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 23:48:16.988: INFO: Pod "webserver-deployment-6676bcd6d4-wftfm" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-wftfm webserver-deployment-6676bcd6d4- deployment-1380 /api/v1/namespaces/deployment-1380/pods/webserver-deployment-6676bcd6d4-wftfm aa91738e-093e-445a-a331-e286b1f4e729 6342908 0 2020-05-20 23:48:13 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b9257fde-cdb2-4f78-bd70-aac0873ad552 0xc002b27387 0xc002b27388}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9257fde-cdb2-4f78-bd70-aac0873ad552\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 23:48:13 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:13 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-20 23:48:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 23:48:16.989: INFO: Pod "webserver-deployment-6676bcd6d4-xz7fz" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-xz7fz webserver-deployment-6676bcd6d4- deployment-1380 /api/v1/namespaces/deployment-1380/pods/webserver-deployment-6676bcd6d4-xz7fz cb4e960a-c657-4bcc-87f3-e56835167135 6342887 0 2020-05-20 23:48:13 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 b9257fde-cdb2-4f78-bd70-aac0873ad552 0xc002b27547 0xc002b27548}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9257fde-cdb2-4f78-bd70-aac0873ad552\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 23:48:13 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGraceP
eriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:13 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-20 23:48:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 23:48:16.989: INFO: Pod "webserver-deployment-84855cf797-59cqb" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-59cqb webserver-deployment-84855cf797- deployment-1380 /api/v1/namespaces/deployment-1380/pods/webserver-deployment-84855cf797-59cqb bfe0ccaa-1e91-4e2a-9fde-9b062f75df92 6342956 0 2020-05-20 23:48:15 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 7d8b599c-cdbe-404c-9e24-42cc8e237311 0xc002b276f7 0xc002b276f8}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d8b599c-cdbe-404c-9e24-42cc8e237311\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},St
din:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 23:48:16.989: INFO: Pod "webserver-deployment-84855cf797-5n7s6" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-5n7s6 webserver-deployment-84855cf797- deployment-1380 
/api/v1/namespaces/deployment-1380/pods/webserver-deployment-84855cf797-5n7s6 192ec88c-1204-4a01-8567-313a8deaaf33 6342854 0 2020-05-20 23:48:01 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 7d8b599c-cdbe-404c-9e24-42cc8e237311 0xc002b27827 0xc002b27828}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d8b599c-cdbe-404c-9e24-42cc8e237311\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 23:48:13 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.68\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{}
,StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.68,StartTime:2020-05-20 23:48:01 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-20 23:48:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d67a68d48532fde382207829011dbea96aa30156d291013ab51fe4901a12bb20,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.68,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 23:48:16.989: INFO: Pod "webserver-deployment-84855cf797-5n88b" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-5n88b webserver-deployment-84855cf797- deployment-1380 /api/v1/namespaces/deployment-1380/pods/webserver-deployment-84855cf797-5n88b 82075811-b5d8-40fc-94e8-6ac51cb0b736 6342825 0 2020-05-20 23:48:01 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 7d8b599c-cdbe-404c-9e24-42cc8e237311 0xc002b279d7 0xc002b279d8}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d8b599c-cdbe-404c-9e24-42cc8e237311\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 23:48:12 +0000 
UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.67\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]V
olumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.67,StartTime:2020-05-20 23:48:01 
+0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-20 23:48:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e182835a8f0d7618c098ea9e569c01764c7a78bfdfc646d37545422644d5302d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.67,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 23:48:16.989: INFO: Pod "webserver-deployment-84855cf797-5vcrb" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-5vcrb webserver-deployment-84855cf797- deployment-1380 /api/v1/namespaces/deployment-1380/pods/webserver-deployment-84855cf797-5vcrb cba59451-9636-46b5-90bc-2c6615f315e3 6342833 0 2020-05-20 23:48:01 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 7d8b599c-cdbe-404c-9e24-42cc8e237311 0xc002b27b87 0xc002b27b88}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d8b599c-cdbe-404c-9e24-42cc8e237311\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 23:48:12 
+0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.84\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevic
es:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.84,StartTime:2020-05-20 
23:48:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-20 23:48:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a8be44c51588393ba86df2fee9256964dd718493bb3b0e5a1cee6229cf22187e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.84,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 23:48:16.990: INFO: Pod "webserver-deployment-84855cf797-9z2lj" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-9z2lj webserver-deployment-84855cf797- deployment-1380 /api/v1/namespaces/deployment-1380/pods/webserver-deployment-84855cf797-9z2lj d1ce1d41-ebf3-443d-85e1-81684c63d1c9 6342955 0 2020-05-20 23:48:15 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 7d8b599c-cdbe-404c-9e24-42cc8e237311 0xc002b27d37 0xc002b27d38}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d8b599c-cdbe-404c-9e24-42cc8e237311\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},St
din:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 23:48:16.990: INFO: Pod "webserver-deployment-84855cf797-bgctq" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-bgctq webserver-deployment-84855cf797- deployment-1380 
/api/v1/namespaces/deployment-1380/pods/webserver-deployment-84855cf797-bgctq b81f7b5f-2d1e-4458-887c-7d3e6a13b554 6342967 0 2020-05-20 23:48:15 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 7d8b599c-cdbe-404c-9e24-42cc8e237311 0xc002b27e67 0xc002b27e68}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d8b599c-cdbe-404c-9e24-42cc8e237311\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/service
account,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:15 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 23:48:16.990: INFO: Pod "webserver-deployment-84855cf797-c6sxb" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-c6sxb webserver-deployment-84855cf797- deployment-1380 /api/v1/namespaces/deployment-1380/pods/webserver-deployment-84855cf797-c6sxb 995380e1-75ab-4961-94fd-4e3c689aac36 6342985 0 2020-05-20 23:48:15 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 7d8b599c-cdbe-404c-9e24-42cc8e237311 0xc002b27f97 0xc002b27f98}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d8b599c-cdbe-404c-9e24-42cc8e237311\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 23:48:16 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:
Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:15 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-20 23:48:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 23:48:16.990: INFO: Pod "webserver-deployment-84855cf797-cj79p" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-cj79p webserver-deployment-84855cf797- deployment-1380 /api/v1/namespaces/deployment-1380/pods/webserver-deployment-84855cf797-cj79p 17fd393b-37eb-4903-9f70-042abf875653 6342959 0 2020-05-20 23:48:15 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 7d8b599c-cdbe-404c-9e24-42cc8e237311 0xc002ae01c7 0xc002ae01c8}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d8b599c-cdbe-404c-9e24-42cc8e237311\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},St
din:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 23:48:16.990: INFO: Pod "webserver-deployment-84855cf797-dnt7l" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-dnt7l webserver-deployment-84855cf797- deployment-1380 
/api/v1/namespaces/deployment-1380/pods/webserver-deployment-84855cf797-dnt7l 5f836765-f7d4-43b2-939b-bed25e427af1 6342969 0 2020-05-20 23:48:15 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 7d8b599c-cdbe-404c-9e24-42cc8e237311 0xc002ae0317 0xc002ae0318}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d8b599c-cdbe-404c-9e24-42cc8e237311\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/service
account,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:15 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 23:48:16.990: INFO: Pod "webserver-deployment-84855cf797-f4vbj" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-f4vbj webserver-deployment-84855cf797- deployment-1380 /api/v1/namespaces/deployment-1380/pods/webserver-deployment-84855cf797-f4vbj c8b4be2d-0fae-4d3b-b679-14bbd1fd47aa 6342965 0 2020-05-20 23:48:15 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 7d8b599c-cdbe-404c-9e24-42cc8e237311 0xc002ae0627 0xc002ae0628}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d8b599c-cdbe-404c-9e24-42cc8e237311\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersist
entDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadine
ssGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 23:48:16.991: INFO: Pod "webserver-deployment-84855cf797-fwmrv" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-fwmrv webserver-deployment-84855cf797- deployment-1380 /api/v1/namespaces/deployment-1380/pods/webserver-deployment-84855cf797-fwmrv bf5ffbbf-ea89-41ad-a20d-24c57c9e1aed 6342823 0 2020-05-20 23:48:01 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 7d8b599c-cdbe-404c-9e24-42cc8e237311 0xc002ae0807 0xc002ae0808}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d8b599c-cdbe-404c-9e24-42cc8e237311\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 23:48:12 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.83\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{}
,StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.83,StartTime:2020-05-20 23:48:01 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-20 23:48:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2a8211b4b9102309980c0b0d4fa46ddb5e02728d03e7c5b365661015a8a97253,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.83,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 23:48:16.991: INFO: Pod "webserver-deployment-84855cf797-gjvqc" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-gjvqc webserver-deployment-84855cf797- deployment-1380 /api/v1/namespaces/deployment-1380/pods/webserver-deployment-84855cf797-gjvqc b7dc1621-0e07-4764-8d7d-2b061575b80a 6342804 0 2020-05-20 23:48:01 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 7d8b599c-cdbe-404c-9e24-42cc8e237311 0xc002ae0a47 0xc002ae0a48}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d8b599c-cdbe-404c-9e24-42cc8e237311\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 23:48:10 +0000 
UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.66\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]V
olumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.66,StartTime:2020-05-20 23:48:01 
+0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-20 23:48:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5c6e05c702e7fcc958e68df5f5d9409f363df5a3d15ae7b942d86bfc1f5bcf29,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.66,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 23:48:16.991: INFO: Pod "webserver-deployment-84855cf797-grdcv" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-grdcv webserver-deployment-84855cf797- deployment-1380 /api/v1/namespaces/deployment-1380/pods/webserver-deployment-84855cf797-grdcv 3bdd53d2-b2fd-4b49-8c12-72670d514d9a 6342938 0 2020-05-20 23:48:15 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 7d8b599c-cdbe-404c-9e24-42cc8e237311 0xc002ae0c37 0xc002ae0c38}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d8b599c-cdbe-404c-9e24-42cc8e237311\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},St
din:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 23:48:16.991: INFO: Pod "webserver-deployment-84855cf797-hbg5c" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-hbg5c webserver-deployment-84855cf797- deployment-1380 
/api/v1/namespaces/deployment-1380/pods/webserver-deployment-84855cf797-hbg5c 91048c76-33b9-4b52-a7f6-7c116a3fbb97 6342998 0 2020-05-20 23:48:15 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 7d8b599c-cdbe-404c-9e24-42cc8e237311 0xc002ae0d67 0xc002ae0d68}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d8b599c-cdbe-404c-9e24-42cc8e237311\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 23:48:16 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:
Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:15 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-20 23:48:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 23:48:16.992: INFO: Pod "webserver-deployment-84855cf797-hxq6m" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-hxq6m webserver-deployment-84855cf797- deployment-1380 /api/v1/namespaces/deployment-1380/pods/webserver-deployment-84855cf797-hxq6m 30c8d049-f241-4fba-b178-955deb42684d 6342958 0 2020-05-20 23:48:15 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 7d8b599c-cdbe-404c-9e24-42cc8e237311 0xc002ae0ef7 0xc002ae0ef8}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d8b599c-cdbe-404c-9e24-42cc8e237311\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},St
din:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 23:48:16.992: INFO: Pod "webserver-deployment-84855cf797-kpg4n" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-kpg4n webserver-deployment-84855cf797- deployment-1380 
/api/v1/namespaces/deployment-1380/pods/webserver-deployment-84855cf797-kpg4n 6a42d72c-d384-4971-91fb-988d42cfe092 6342962 0 2020-05-20 23:48:15 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 7d8b599c-cdbe-404c-9e24-42cc8e237311 0xc002ae1027 0xc002ae1028}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d8b599c-cdbe-404c-9e24-42cc8e237311\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/service
account,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:15 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 23:48:16.992: INFO: Pod "webserver-deployment-84855cf797-nl2mn" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-nl2mn webserver-deployment-84855cf797- deployment-1380 /api/v1/namespaces/deployment-1380/pods/webserver-deployment-84855cf797-nl2mn b442d681-4aa4-45cc-9842-5c3303149566 6342856 0 2020-05-20 23:48:01 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 7d8b599c-cdbe-404c-9e24-42cc8e237311 0xc002ae1157 0xc002ae1158}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d8b599c-cdbe-404c-9e24-42cc8e237311\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 23:48:13 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.69\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{}
,StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.69,StartTime:2020-05-20 23:48:01 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-20 23:48:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3f8b52c6caaba8e08de80d161743ad55ec67ec6e7fadbc6e1f81363e8d5e6dfc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.69,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 23:48:16.992: INFO: Pod "webserver-deployment-84855cf797-qb4tw" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-qb4tw webserver-deployment-84855cf797- deployment-1380 /api/v1/namespaces/deployment-1380/pods/webserver-deployment-84855cf797-qb4tw 9ffcbc46-bd49-4dc1-b936-66010a93d70f 6342815 0 2020-05-20 23:48:01 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 7d8b599c-cdbe-404c-9e24-42cc8e237311 0xc002ae1307 0xc002ae1308}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d8b599c-cdbe-404c-9e24-42cc8e237311\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 23:48:11 +0000 
UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.82\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]V
olumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.82,StartTime:2020-05-20 23:48:01 
+0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-20 23:48:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://700f5b9027e4c80bf80e2e8f0d9c201e09ac9fad2cbcbdeab9fcccbfc861824a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.82,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 23:48:16.992: INFO: Pod "webserver-deployment-84855cf797-rnd2d" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-rnd2d webserver-deployment-84855cf797- deployment-1380 /api/v1/namespaces/deployment-1380/pods/webserver-deployment-84855cf797-rnd2d ae0f5b1c-ca2d-44a1-b257-d18a33451948 6342790 0 2020-05-20 23:48:01 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 7d8b599c-cdbe-404c-9e24-42cc8e237311 0xc002ae14c7 0xc002ae14c8}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d8b599c-cdbe-404c-9e24-42cc8e237311\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 23:48:07 
+0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.65\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevic
es:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.65,StartTime:2020-05-20 
23:48:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-20 23:48:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c608855893174bdb679fbe2aadf0883684c5b1cff433a1906de8b273ea4ce095,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.65,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 20 23:48:16.993: INFO: Pod "webserver-deployment-84855cf797-vs5xp" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-vs5xp webserver-deployment-84855cf797- deployment-1380 /api/v1/namespaces/deployment-1380/pods/webserver-deployment-84855cf797-vs5xp d2bbd589-2dd6-4650-9fdc-ba854b08ae7d 6342966 0 2020-05-20 23:48:15 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 7d8b599c-cdbe-404c-9e24-42cc8e237311 0xc002ae1677 0xc002ae1678}] [] [{kube-controller-manager Update v1 2020-05-20 23:48:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d8b599c-cdbe-404c-9e24-42cc8e237311\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jbn8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jbn8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jbn8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},St
din:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:48:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:48:16.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "deployment-1380" for this suite. • [SLOW TEST:16.189 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":288,"completed":39,"skipped":776,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:48:17.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-e315196d-31f6-47c7-a527-a8b4a97f4046 STEP: Creating a pod to test consume secrets May 20 23:48:17.584: INFO: Waiting up to 5m0s for pod "pod-secrets-8471f34b-88e3-45a3-8deb-1999853312c0" in namespace "secrets-9691" to be "Succeeded or Failed" May 20 23:48:17.627: INFO: Pod "pod-secrets-8471f34b-88e3-45a3-8deb-1999853312c0": Phase="Pending", Reason="", readiness=false. Elapsed: 43.294797ms May 20 23:48:19.719: INFO: Pod "pod-secrets-8471f34b-88e3-45a3-8deb-1999853312c0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.134887606s May 20 23:48:22.277: INFO: Pod "pod-secrets-8471f34b-88e3-45a3-8deb-1999853312c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.692931605s May 20 23:48:24.414: INFO: Pod "pod-secrets-8471f34b-88e3-45a3-8deb-1999853312c0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.829938928s May 20 23:48:26.481: INFO: Pod "pod-secrets-8471f34b-88e3-45a3-8deb-1999853312c0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.89676827s May 20 23:48:29.011: INFO: Pod "pod-secrets-8471f34b-88e3-45a3-8deb-1999853312c0": Phase="Pending", Reason="", readiness=false. Elapsed: 11.426573901s May 20 23:48:31.284: INFO: Pod "pod-secrets-8471f34b-88e3-45a3-8deb-1999853312c0": Phase="Pending", Reason="", readiness=false. Elapsed: 13.700070474s May 20 23:48:33.331: INFO: Pod "pod-secrets-8471f34b-88e3-45a3-8deb-1999853312c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.746542058s STEP: Saw pod success May 20 23:48:33.331: INFO: Pod "pod-secrets-8471f34b-88e3-45a3-8deb-1999853312c0" satisfied condition "Succeeded or Failed" May 20 23:48:33.487: INFO: Trying to get logs from node latest-worker pod pod-secrets-8471f34b-88e3-45a3-8deb-1999853312c0 container secret-volume-test: STEP: delete the pod May 20 23:48:33.985: INFO: Waiting for pod pod-secrets-8471f34b-88e3-45a3-8deb-1999853312c0 to disappear May 20 23:48:34.000: INFO: Pod pod-secrets-8471f34b-88e3-45a3-8deb-1999853312c0 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:48:34.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9691" for this suite. 
• [SLOW TEST:16.791 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":40,"skipped":785,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:48:34.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 20 23:48:34.223: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a105b680-b92f-4fcb-841e-54ccf769dd86" in namespace "projected-6822" to be "Succeeded or Failed" May 20 23:48:34.280: INFO: Pod "downwardapi-volume-a105b680-b92f-4fcb-841e-54ccf769dd86": Phase="Pending", Reason="", readiness=false. 
Elapsed: 56.062961ms May 20 23:48:36.513: INFO: Pod "downwardapi-volume-a105b680-b92f-4fcb-841e-54ccf769dd86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289901014s May 20 23:48:38.517: INFO: Pod "downwardapi-volume-a105b680-b92f-4fcb-841e-54ccf769dd86": Phase="Pending", Reason="", readiness=false. Elapsed: 4.293905633s May 20 23:48:40.521: INFO: Pod "downwardapi-volume-a105b680-b92f-4fcb-841e-54ccf769dd86": Phase="Running", Reason="", readiness=true. Elapsed: 6.297889902s May 20 23:48:42.526: INFO: Pod "downwardapi-volume-a105b680-b92f-4fcb-841e-54ccf769dd86": Phase="Running", Reason="", readiness=true. Elapsed: 8.302133067s May 20 23:48:44.529: INFO: Pod "downwardapi-volume-a105b680-b92f-4fcb-841e-54ccf769dd86": Phase="Running", Reason="", readiness=true. Elapsed: 10.305373519s May 20 23:48:46.685: INFO: Pod "downwardapi-volume-a105b680-b92f-4fcb-841e-54ccf769dd86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.461055494s STEP: Saw pod success May 20 23:48:46.685: INFO: Pod "downwardapi-volume-a105b680-b92f-4fcb-841e-54ccf769dd86" satisfied condition "Succeeded or Failed" May 20 23:48:46.688: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-a105b680-b92f-4fcb-841e-54ccf769dd86 container client-container: STEP: delete the pod May 20 23:48:46.941: INFO: Waiting for pod downwardapi-volume-a105b680-b92f-4fcb-841e-54ccf769dd86 to disappear May 20 23:48:46.983: INFO: Pod downwardapi-volume-a105b680-b92f-4fcb-841e-54ccf769dd86 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:48:46.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6822" for this suite. 
• [SLOW TEST:12.983 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":41,"skipped":789,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:48:46.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-4de85b23-0ba8-4295-9b5c-cb2ae4a133cc STEP: Creating a pod to test consume secrets May 20 23:48:47.153: INFO: Waiting up to 5m0s for pod "pod-secrets-7a98fed5-5140-49b5-95af-98cfcbbe9f9f" in namespace "secrets-6537" to be "Succeeded or Failed" May 20 23:48:47.445: INFO: Pod "pod-secrets-7a98fed5-5140-49b5-95af-98cfcbbe9f9f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 292.074377ms May 20 23:48:49.450: INFO: Pod "pod-secrets-7a98fed5-5140-49b5-95af-98cfcbbe9f9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.296671923s May 20 23:48:51.454: INFO: Pod "pod-secrets-7a98fed5-5140-49b5-95af-98cfcbbe9f9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.301081147s STEP: Saw pod success May 20 23:48:51.454: INFO: Pod "pod-secrets-7a98fed5-5140-49b5-95af-98cfcbbe9f9f" satisfied condition "Succeeded or Failed" May 20 23:48:51.458: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-7a98fed5-5140-49b5-95af-98cfcbbe9f9f container secret-volume-test: STEP: delete the pod May 20 23:48:51.591: INFO: Waiting for pod pod-secrets-7a98fed5-5140-49b5-95af-98cfcbbe9f9f to disappear May 20 23:48:51.606: INFO: Pod pod-secrets-7a98fed5-5140-49b5-95af-98cfcbbe9f9f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:48:51.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6537" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":42,"skipped":801,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:48:51.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 20 23:48:55.762: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:48:55.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5055" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":43,"skipped":812,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:48:55.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:48:56.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2569" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":288,"completed":44,"skipped":819,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:48:56.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 20 23:48:56.147: INFO: >>> kubeConfig: /root/.kube/config May 20 23:48:59.592: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:49:10.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5770" for this suite. 
• [SLOW TEST:14.330 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of same group and version but different kinds [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":288,"completed":45,"skipped":822,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 23:49:10.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
May 20 23:49:10.443: INFO: Waiting up to 5m0s for pod "downward-api-7479358f-a969-41d5-88c4-78fd532765ce" in namespace "downward-api-6409" to be "Succeeded or Failed"
May 20 23:49:10.446: INFO: Pod "downward-api-7479358f-a969-41d5-88c4-78fd532765ce": Phase="Pending", Reason="", readiness=false. Elapsed: 3.204101ms
May 20 23:49:12.481: INFO: Pod "downward-api-7479358f-a969-41d5-88c4-78fd532765ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038658816s
May 20 23:49:14.519: INFO: Pod "downward-api-7479358f-a969-41d5-88c4-78fd532765ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076139607s
STEP: Saw pod success
May 20 23:49:14.519: INFO: Pod "downward-api-7479358f-a969-41d5-88c4-78fd532765ce" satisfied condition "Succeeded or Failed"
May 20 23:49:14.521: INFO: Trying to get logs from node latest-worker pod downward-api-7479358f-a969-41d5-88c4-78fd532765ce container dapi-container:
STEP: delete the pod
May 20 23:49:14.650: INFO: Waiting for pod downward-api-7479358f-a969-41d5-88c4-78fd532765ce to disappear
May 20 23:49:14.659: INFO: Pod downward-api-7479358f-a969-41d5-88c4-78fd532765ce no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 23:49:14.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6409" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":288,"completed":46,"skipped":865,"failed":0}
SSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 23:49:14.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-12e939d1-53fc-458a-8866-ef209293325c
STEP: Creating a pod to test consume configMaps
May 20 23:49:14.745: INFO: Waiting up to 5m0s for pod "pod-configmaps-ec6a7618-7988-469d-9c44-e21764ea88af" in namespace "configmap-2557" to be "Succeeded or Failed"
May 20 23:49:14.774: INFO: Pod "pod-configmaps-ec6a7618-7988-469d-9c44-e21764ea88af": Phase="Pending", Reason="", readiness=false. Elapsed: 29.157248ms
May 20 23:49:16.778: INFO: Pod "pod-configmaps-ec6a7618-7988-469d-9c44-e21764ea88af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032704602s
May 20 23:49:18.782: INFO: Pod "pod-configmaps-ec6a7618-7988-469d-9c44-e21764ea88af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036472569s
STEP: Saw pod success
May 20 23:49:18.782: INFO: Pod "pod-configmaps-ec6a7618-7988-469d-9c44-e21764ea88af" satisfied condition "Succeeded or Failed"
May 20 23:49:18.784: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-ec6a7618-7988-469d-9c44-e21764ea88af container configmap-volume-test:
STEP: delete the pod
May 20 23:49:18.821: INFO: Waiting for pod pod-configmaps-ec6a7618-7988-469d-9c44-e21764ea88af to disappear
May 20 23:49:18.827: INFO: Pod pod-configmaps-ec6a7618-7988-469d-9c44-e21764ea88af no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 23:49:18.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2557" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":47,"skipped":872,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 23:49:18.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
May 20 23:49:18.937: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
May 20 23:49:18.970: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
May 20 23:49:18.970: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
May 20 23:49:18.983: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
May 20 23:49:18.984: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
May 20 23:49:19.029: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
May 20 23:49:19.029: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
May 20 23:49:26.289: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 23:49:26.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-5995" for this suite.
• [SLOW TEST:7.538 seconds]
[sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":288,"completed":48,"skipped":892,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 23:49:26.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-secret-gc2n
STEP: Creating a pod to test atomic-volume-subpath
May 20 23:49:26.466: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-gc2n" in namespace "subpath-3119" to be "Succeeded or Failed"
May 20 23:49:26.511: INFO: Pod "pod-subpath-test-secret-gc2n": Phase="Pending", Reason="", readiness=false. Elapsed: 45.424618ms
May 20 23:49:28.516: INFO: Pod "pod-subpath-test-secret-gc2n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050304056s
May 20 23:49:30.521: INFO: Pod "pod-subpath-test-secret-gc2n": Phase="Running", Reason="", readiness=true. Elapsed: 4.055381865s
May 20 23:49:32.525: INFO: Pod "pod-subpath-test-secret-gc2n": Phase="Running", Reason="", readiness=true. Elapsed: 6.059335275s
May 20 23:49:34.529: INFO: Pod "pod-subpath-test-secret-gc2n": Phase="Running", Reason="", readiness=true. Elapsed: 8.063579811s
May 20 23:49:36.534: INFO: Pod "pod-subpath-test-secret-gc2n": Phase="Running", Reason="", readiness=true. Elapsed: 10.067989301s
May 20 23:49:38.538: INFO: Pod "pod-subpath-test-secret-gc2n": Phase="Running", Reason="", readiness=true. Elapsed: 12.072511785s
May 20 23:49:40.542: INFO: Pod "pod-subpath-test-secret-gc2n": Phase="Running", Reason="", readiness=true. Elapsed: 14.076457442s
May 20 23:49:42.545: INFO: Pod "pod-subpath-test-secret-gc2n": Phase="Running", Reason="", readiness=true. Elapsed: 16.079435719s
May 20 23:49:44.550: INFO: Pod "pod-subpath-test-secret-gc2n": Phase="Running", Reason="", readiness=true. Elapsed: 18.084039s
May 20 23:49:46.595: INFO: Pod "pod-subpath-test-secret-gc2n": Phase="Running", Reason="", readiness=true. Elapsed: 20.128981812s
May 20 23:49:48.600: INFO: Pod "pod-subpath-test-secret-gc2n": Phase="Running", Reason="", readiness=true. Elapsed: 22.133632347s
May 20 23:49:50.800: INFO: Pod "pod-subpath-test-secret-gc2n": Phase="Running", Reason="", readiness=true. Elapsed: 24.333608367s
May 20 23:49:52.804: INFO: Pod "pod-subpath-test-secret-gc2n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.337769306s
STEP: Saw pod success
May 20 23:49:52.804: INFO: Pod "pod-subpath-test-secret-gc2n" satisfied condition "Succeeded or Failed"
May 20 23:49:52.807: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-gc2n container test-container-subpath-secret-gc2n:
STEP: delete the pod
May 20 23:49:52.860: INFO: Waiting for pod pod-subpath-test-secret-gc2n to disappear
May 20 23:49:52.866: INFO: Pod pod-subpath-test-secret-gc2n no longer exists
STEP: Deleting pod pod-subpath-test-secret-gc2n
May 20 23:49:52.866: INFO: Deleting pod "pod-subpath-test-secret-gc2n" in namespace "subpath-3119"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 23:49:52.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3119" for this suite.
• [SLOW TEST:26.522 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":288,"completed":49,"skipped":901,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 23:49:52.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name s-test-opt-del-3fb64574-19ec-4b62-916b-b33621d1d13e
STEP: Creating secret with name s-test-opt-upd-7af34b8f-c6d3-4e30-9ba1-f3996413e5c6
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-3fb64574-19ec-4b62-916b-b33621d1d13e
STEP: Updating secret s-test-opt-upd-7af34b8f-c6d3-4e30-9ba1-f3996413e5c6
STEP: Creating secret with name s-test-opt-create-3e5967a2-77d3-46a9-9f54-65ca775039fe
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 23:50:01.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8916" for this suite.
• [SLOW TEST:8.329 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":50,"skipped":920,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 23:50:01.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 20 23:50:01.312: INFO: Waiting up to 5m0s for pod "downwardapi-volume-253f5f3b-94e8-426b-a950-7c827eed0f02" in namespace "downward-api-1776" to be "Succeeded or Failed"
May 20 23:50:01.325: INFO: Pod "downwardapi-volume-253f5f3b-94e8-426b-a950-7c827eed0f02": Phase="Pending", Reason="", readiness=false. Elapsed: 13.295275ms
May 20 23:50:03.330: INFO: Pod "downwardapi-volume-253f5f3b-94e8-426b-a950-7c827eed0f02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017711916s
May 20 23:50:05.333: INFO: Pod "downwardapi-volume-253f5f3b-94e8-426b-a950-7c827eed0f02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020934806s
STEP: Saw pod success
May 20 23:50:05.333: INFO: Pod "downwardapi-volume-253f5f3b-94e8-426b-a950-7c827eed0f02" satisfied condition "Succeeded or Failed"
May 20 23:50:05.335: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-253f5f3b-94e8-426b-a950-7c827eed0f02 container client-container:
STEP: delete the pod
May 20 23:50:05.416: INFO: Waiting for pod downwardapi-volume-253f5f3b-94e8-426b-a950-7c827eed0f02 to disappear
May 20 23:50:05.418: INFO: Pod downwardapi-volume-253f5f3b-94e8-426b-a950-7c827eed0f02 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 23:50:05.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1776" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":51,"skipped":941,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 23:50:05.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 20 23:50:05.799: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a44c0346-e4f9-4539-a860-8bab68145844" in namespace "downward-api-1481" to be "Succeeded or Failed"
May 20 23:50:05.813: INFO: Pod "downwardapi-volume-a44c0346-e4f9-4539-a860-8bab68145844": Phase="Pending", Reason="", readiness=false. Elapsed: 13.917956ms
May 20 23:50:07.817: INFO: Pod "downwardapi-volume-a44c0346-e4f9-4539-a860-8bab68145844": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018509996s
May 20 23:50:09.821: INFO: Pod "downwardapi-volume-a44c0346-e4f9-4539-a860-8bab68145844": Phase="Running", Reason="", readiness=true. Elapsed: 4.021983684s
May 20 23:50:11.837: INFO: Pod "downwardapi-volume-a44c0346-e4f9-4539-a860-8bab68145844": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037944503s
STEP: Saw pod success
May 20 23:50:11.837: INFO: Pod "downwardapi-volume-a44c0346-e4f9-4539-a860-8bab68145844" satisfied condition "Succeeded or Failed"
May 20 23:50:11.840: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-a44c0346-e4f9-4539-a860-8bab68145844 container client-container:
STEP: delete the pod
May 20 23:50:11.912: INFO: Waiting for pod downwardapi-volume-a44c0346-e4f9-4539-a860-8bab68145844 to disappear
May 20 23:50:11.925: INFO: Pod downwardapi-volume-a44c0346-e4f9-4539-a860-8bab68145844 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 23:50:11.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1481" for this suite.
• [SLOW TEST:6.507 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":52,"skipped":977,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 23:50:11.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-0c4aaf06-4544-41d8-a378-b064d993bf3b
STEP: Creating a pod to test consume configMaps
May 20 23:50:12.022: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6139af02-e6b0-45bd-8663-8737f3c35432" in namespace "projected-3993" to be "Succeeded or Failed"
May 20 23:50:12.063: INFO: Pod "pod-projected-configmaps-6139af02-e6b0-45bd-8663-8737f3c35432": Phase="Pending", Reason="", readiness=false. Elapsed: 40.808635ms
May 20 23:50:14.067: INFO: Pod "pod-projected-configmaps-6139af02-e6b0-45bd-8663-8737f3c35432": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044879011s
May 20 23:50:16.071: INFO: Pod "pod-projected-configmaps-6139af02-e6b0-45bd-8663-8737f3c35432": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04929676s
STEP: Saw pod success
May 20 23:50:16.071: INFO: Pod "pod-projected-configmaps-6139af02-e6b0-45bd-8663-8737f3c35432" satisfied condition "Succeeded or Failed"
May 20 23:50:16.075: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-6139af02-e6b0-45bd-8663-8737f3c35432 container projected-configmap-volume-test:
STEP: delete the pod
May 20 23:50:16.096: INFO: Waiting for pod pod-projected-configmaps-6139af02-e6b0-45bd-8663-8737f3c35432 to disappear
May 20 23:50:16.100: INFO: Pod pod-projected-configmaps-6139af02-e6b0-45bd-8663-8737f3c35432 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 23:50:16.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3993" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":53,"skipped":985,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 23:50:16.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 20 23:50:16.224: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 20 23:50:16.242: INFO: Waiting for terminating namespaces to be deleted...
May 20 23:50:16.244: INFO: Logging pods the apiserver thinks is on node latest-worker before test
May 20 23:50:16.250: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded)
May 20 23:50:16.250: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0
May 20 23:50:16.250: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded)
May 20 23:50:16.250: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0
May 20 23:50:16.250: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded)
May 20 23:50:16.250: INFO: Container kindnet-cni ready: true, restart count 0
May 20 23:50:16.250: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded)
May 20 23:50:16.250: INFO: Container kube-proxy ready: true, restart count 0
May 20 23:50:16.250: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test
May 20 23:50:16.292: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded)
May 20 23:50:16.292: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0
May 20 23:50:16.292: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded)
May 20 23:50:16.292: INFO: Container terminate-cmd-rpa ready: true, restart count 2
May 20 23:50:16.292: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded)
May 20 23:50:16.292: INFO: Container kindnet-cni ready: true, restart count 0
May 20 23:50:16.292: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded)
May 20 23:50:16.292: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.1610e181dbca8ae6], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
STEP: Considering event: Type = [Warning], Name = [restricted-pod.1610e181ddebdbc0], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 23:50:17.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1902" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":288,"completed":54,"skipped":997,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 23:50:17.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 20 23:50:17.832: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 20 23:50:19.842: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615417, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615417, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615417, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615417, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 20 23:50:22.874: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 23:50:35.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3307" for this suite.
STEP: Destroying namespace "webhook-3307-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:17.874 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should honor timeout [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":288,"completed":55,"skipped":1001,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 23:50:35.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[BeforeEach] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303
[It] should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a replication controller
May 20 23:50:35.248: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-240'
May 20 23:50:35.730: INFO: stderr: ""
May 20 23:50:35.730: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 20 23:50:35.730: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-240'
May 20 23:50:35.871: INFO: stderr: ""
May 20 23:50:35.871: INFO: stdout: "update-demo-nautilus-9zpq4 update-demo-nautilus-xb9nn "
May 20 23:50:35.871: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9zpq4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-240'
May 20 23:50:36.062: INFO: stderr: ""
May 20 23:50:36.062: INFO: stdout: ""
May 20 23:50:36.062: INFO: update-demo-nautilus-9zpq4 is created but not running
May 20 23:50:41.062: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-240'
May 20 23:50:41.179: INFO: stderr: ""
May 20 23:50:41.179: INFO: stdout: "update-demo-nautilus-9zpq4 update-demo-nautilus-xb9nn "
May 20 23:50:41.180: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9zpq4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-240'
May 20 23:50:41.284: INFO: stderr: ""
May 20 23:50:41.284: INFO: stdout: "true"
May 20 23:50:41.284: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9zpq4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-240'
May 20 23:50:41.395: INFO: stderr: ""
May 20 23:50:41.395: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 20 23:50:41.395: INFO: validating pod update-demo-nautilus-9zpq4
May 20 23:50:41.399: INFO: got data: { "image": "nautilus.jpg" }
May 20 23:50:41.399: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 20 23:50:41.399: INFO: update-demo-nautilus-9zpq4 is verified up and running May 20 23:50:41.399: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xb9nn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-240' May 20 23:50:41.502: INFO: stderr: "" May 20 23:50:41.502: INFO: stdout: "true" May 20 23:50:41.502: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xb9nn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-240' May 20 23:50:41.617: INFO: stderr: "" May 20 23:50:41.617: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 20 23:50:41.617: INFO: validating pod update-demo-nautilus-xb9nn May 20 23:50:41.620: INFO: got data: { "image": "nautilus.jpg" } May 20 23:50:41.620: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 20 23:50:41.620: INFO: update-demo-nautilus-xb9nn is verified up and running STEP: using delete to clean up resources May 20 23:50:41.620: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-240' May 20 23:50:41.727: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 20 23:50:41.727: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 20 23:50:41.727: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-240' May 20 23:50:41.823: INFO: stderr: "No resources found in kubectl-240 namespace.\n" May 20 23:50:41.823: INFO: stdout: "" May 20 23:50:41.823: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-240 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 20 23:50:42.012: INFO: stderr: "" May 20 23:50:42.012: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:50:42.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-240" for this suite. 
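The manifest piped into `kubectl create -f -` above is not captured in the log. A minimal ReplicationController consistent with what the log does show (two `update-demo-nautilus-*` pods selected by `name=update-demo`, a container named `update-demo` running `gcr.io/kubernetes-e2e-test-images/nautilus:1.0`) might look like the following sketch; the exact fixture the test uses is not shown here, so treat every field as a reconstruction:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus     # matches "replicationcontroller/update-demo-nautilus created"
spec:
  replicas: 2                    # the log lists two pods: ...-9zpq4 and ...-xb9nn
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo        # the label the test's kubectl queries filter on (-l name=update-demo)
    spec:
      containers:
      - name: update-demo        # container name checked by the go-template (eq .name "update-demo")
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
```

Note that the cleanup step deletes with `--grace-period=0 --force`, which skips graceful termination; that is why kubectl prints the "Immediate deletion does not wait for confirmation" warning seen above.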
• [SLOW TEST:6.820 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301
should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":288,"completed":56,"skipped":1006,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 23:50:42.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-3650
May 20 23:50:46.810: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3650 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
May 20 23:50:47.061: INFO: stderr: "I0520 23:50:46.958876 1433 log.go:172] (0xc000c02f20)
(0xc000544f00) Create stream\nI0520 23:50:46.958942 1433 log.go:172] (0xc000c02f20) (0xc000544f00) Stream added, broadcasting: 1\nI0520 23:50:46.961421 1433 log.go:172] (0xc000c02f20) Reply frame received for 1\nI0520 23:50:46.961459 1433 log.go:172] (0xc000c02f20) (0xc0005454a0) Create stream\nI0520 23:50:46.961471 1433 log.go:172] (0xc000c02f20) (0xc0005454a0) Stream added, broadcasting: 3\nI0520 23:50:46.963153 1433 log.go:172] (0xc000c02f20) Reply frame received for 3\nI0520 23:50:46.963174 1433 log.go:172] (0xc000c02f20) (0xc00023ab40) Create stream\nI0520 23:50:46.963182 1433 log.go:172] (0xc000c02f20) (0xc00023ab40) Stream added, broadcasting: 5\nI0520 23:50:46.965031 1433 log.go:172] (0xc000c02f20) Reply frame received for 5\nI0520 23:50:47.048233 1433 log.go:172] (0xc000c02f20) Data frame received for 5\nI0520 23:50:47.048257 1433 log.go:172] (0xc00023ab40) (5) Data frame handling\nI0520 23:50:47.048271 1433 log.go:172] (0xc00023ab40) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0520 23:50:47.053930 1433 log.go:172] (0xc000c02f20) Data frame received for 3\nI0520 23:50:47.053959 1433 log.go:172] (0xc0005454a0) (3) Data frame handling\nI0520 23:50:47.053991 1433 log.go:172] (0xc0005454a0) (3) Data frame sent\nI0520 23:50:47.054728 1433 log.go:172] (0xc000c02f20) Data frame received for 5\nI0520 23:50:47.054744 1433 log.go:172] (0xc00023ab40) (5) Data frame handling\nI0520 23:50:47.054779 1433 log.go:172] (0xc000c02f20) Data frame received for 3\nI0520 23:50:47.054804 1433 log.go:172] (0xc0005454a0) (3) Data frame handling\nI0520 23:50:47.056541 1433 log.go:172] (0xc000c02f20) Data frame received for 1\nI0520 23:50:47.056565 1433 log.go:172] (0xc000544f00) (1) Data frame handling\nI0520 23:50:47.056583 1433 log.go:172] (0xc000544f00) (1) Data frame sent\nI0520 23:50:47.056599 1433 log.go:172] (0xc000c02f20) (0xc000544f00) Stream removed, broadcasting: 1\nI0520 23:50:47.056613 1433 log.go:172] (0xc000c02f20) Go away 
received\nI0520 23:50:47.056902 1433 log.go:172] (0xc000c02f20) (0xc000544f00) Stream removed, broadcasting: 1\nI0520 23:50:47.056919 1433 log.go:172] (0xc000c02f20) (0xc0005454a0) Stream removed, broadcasting: 3\nI0520 23:50:47.056936 1433 log.go:172] (0xc000c02f20) (0xc00023ab40) Stream removed, broadcasting: 5\n"
May 20 23:50:47.062: INFO: stdout: "iptables"
May 20 23:50:47.062: INFO: proxyMode: iptables
May 20 23:50:47.065: INFO: Waiting for pod kube-proxy-mode-detector to disappear
May 20 23:50:47.128: INFO: Pod kube-proxy-mode-detector still exists
May 20 23:50:49.129: INFO: Waiting for pod kube-proxy-mode-detector to disappear
May 20 23:50:49.133: INFO: Pod kube-proxy-mode-detector still exists
May 20 23:50:51.129: INFO: Waiting for pod kube-proxy-mode-detector to disappear
May 20 23:50:51.132: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating service affinity-clusterip-timeout in namespace services-3650
STEP: creating replication controller affinity-clusterip-timeout in namespace services-3650
I0520 23:50:51.175485 8 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-3650, replica count: 3
I0520 23:50:54.230674 8 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0520 23:50:57.230931 8 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 20 23:50:57.236: INFO: Creating new exec pod
May 20 23:51:02.251: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3650 execpod-affinityhpkng -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
May 20 23:51:02.510: INFO: stderr: "I0520 23:51:02.408353 1452 log.go:172] (0xc000adb4a0) (0xc000b74140) Create stream\nI0520 23:51:02.408410 1452
log.go:172] (0xc000adb4a0) (0xc000b74140) Stream added, broadcasting: 1\nI0520 23:51:02.414518 1452 log.go:172] (0xc000adb4a0) Reply frame received for 1\nI0520 23:51:02.414567 1452 log.go:172] (0xc000adb4a0) (0xc000714dc0) Create stream\nI0520 23:51:02.414582 1452 log.go:172] (0xc000adb4a0) (0xc000714dc0) Stream added, broadcasting: 3\nI0520 23:51:02.415476 1452 log.go:172] (0xc000adb4a0) Reply frame received for 3\nI0520 23:51:02.415518 1452 log.go:172] (0xc000adb4a0) (0xc0006e8be0) Create stream\nI0520 23:51:02.415539 1452 log.go:172] (0xc000adb4a0) (0xc0006e8be0) Stream added, broadcasting: 5\nI0520 23:51:02.416544 1452 log.go:172] (0xc000adb4a0) Reply frame received for 5\nI0520 23:51:02.502312 1452 log.go:172] (0xc000adb4a0) Data frame received for 5\nI0520 23:51:02.502356 1452 log.go:172] (0xc0006e8be0) (5) Data frame handling\nI0520 23:51:02.502386 1452 log.go:172] (0xc0006e8be0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nI0520 23:51:02.502691 1452 log.go:172] (0xc000adb4a0) Data frame received for 5\nI0520 23:51:02.502709 1452 log.go:172] (0xc0006e8be0) (5) Data frame handling\nI0520 23:51:02.502725 1452 log.go:172] (0xc0006e8be0) (5) Data frame sent\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0520 23:51:02.503119 1452 log.go:172] (0xc000adb4a0) Data frame received for 5\nI0520 23:51:02.503162 1452 log.go:172] (0xc0006e8be0) (5) Data frame handling\nI0520 23:51:02.503386 1452 log.go:172] (0xc000adb4a0) Data frame received for 3\nI0520 23:51:02.503430 1452 log.go:172] (0xc000714dc0) (3) Data frame handling\nI0520 23:51:02.505061 1452 log.go:172] (0xc000adb4a0) Data frame received for 1\nI0520 23:51:02.505092 1452 log.go:172] (0xc000b74140) (1) Data frame handling\nI0520 23:51:02.505131 1452 log.go:172] (0xc000b74140) (1) Data frame sent\nI0520 23:51:02.505150 1452 log.go:172] (0xc000adb4a0) (0xc000b74140) Stream removed, broadcasting: 1\nI0520 23:51:02.505171 1452 log.go:172] (0xc000adb4a0) Go away 
received\nI0520 23:51:02.505515 1452 log.go:172] (0xc000adb4a0) (0xc000b74140) Stream removed, broadcasting: 1\nI0520 23:51:02.505540 1452 log.go:172] (0xc000adb4a0) (0xc000714dc0) Stream removed, broadcasting: 3\nI0520 23:51:02.505553 1452 log.go:172] (0xc000adb4a0) (0xc0006e8be0) Stream removed, broadcasting: 5\n" May 20 23:51:02.510: INFO: stdout: "" May 20 23:51:02.511: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3650 execpod-affinityhpkng -- /bin/sh -x -c nc -zv -t -w 2 10.102.81.161 80' May 20 23:51:02.735: INFO: stderr: "I0520 23:51:02.645366 1474 log.go:172] (0xc0000e8370) (0xc0006fc780) Create stream\nI0520 23:51:02.645421 1474 log.go:172] (0xc0000e8370) (0xc0006fc780) Stream added, broadcasting: 1\nI0520 23:51:02.647121 1474 log.go:172] (0xc0000e8370) Reply frame received for 1\nI0520 23:51:02.647157 1474 log.go:172] (0xc0000e8370) (0xc00051cf00) Create stream\nI0520 23:51:02.647166 1474 log.go:172] (0xc0000e8370) (0xc00051cf00) Stream added, broadcasting: 3\nI0520 23:51:02.648274 1474 log.go:172] (0xc0000e8370) Reply frame received for 3\nI0520 23:51:02.648315 1474 log.go:172] (0xc0000e8370) (0xc0000f39a0) Create stream\nI0520 23:51:02.648340 1474 log.go:172] (0xc0000e8370) (0xc0000f39a0) Stream added, broadcasting: 5\nI0520 23:51:02.649771 1474 log.go:172] (0xc0000e8370) Reply frame received for 5\nI0520 23:51:02.728192 1474 log.go:172] (0xc0000e8370) Data frame received for 3\nI0520 23:51:02.728217 1474 log.go:172] (0xc00051cf00) (3) Data frame handling\nI0520 23:51:02.728290 1474 log.go:172] (0xc0000e8370) Data frame received for 5\nI0520 23:51:02.728360 1474 log.go:172] (0xc0000f39a0) (5) Data frame handling\nI0520 23:51:02.728393 1474 log.go:172] (0xc0000f39a0) (5) Data frame sent\nI0520 23:51:02.728407 1474 log.go:172] (0xc0000e8370) Data frame received for 5\nI0520 23:51:02.728422 1474 log.go:172] (0xc0000f39a0) (5) Data frame handling\n+ nc -zv -t -w 2 
10.102.81.161 80\nConnection to 10.102.81.161 80 port [tcp/http] succeeded!\nI0520 23:51:02.729982 1474 log.go:172] (0xc0000e8370) Data frame received for 1\nI0520 23:51:02.730007 1474 log.go:172] (0xc0006fc780) (1) Data frame handling\nI0520 23:51:02.730026 1474 log.go:172] (0xc0006fc780) (1) Data frame sent\nI0520 23:51:02.730046 1474 log.go:172] (0xc0000e8370) (0xc0006fc780) Stream removed, broadcasting: 1\nI0520 23:51:02.730066 1474 log.go:172] (0xc0000e8370) Go away received\nI0520 23:51:02.730443 1474 log.go:172] (0xc0000e8370) (0xc0006fc780) Stream removed, broadcasting: 1\nI0520 23:51:02.730460 1474 log.go:172] (0xc0000e8370) (0xc00051cf00) Stream removed, broadcasting: 3\nI0520 23:51:02.730468 1474 log.go:172] (0xc0000e8370) (0xc0000f39a0) Stream removed, broadcasting: 5\n" May 20 23:51:02.735: INFO: stdout: "" May 20 23:51:02.735: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3650 execpod-affinityhpkng -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.102.81.161:80/ ; done' May 20 23:51:03.040: INFO: stderr: "I0520 23:51:02.878768 1496 log.go:172] (0xc000928790) (0xc00069f680) Create stream\nI0520 23:51:02.878859 1496 log.go:172] (0xc000928790) (0xc00069f680) Stream added, broadcasting: 1\nI0520 23:51:02.882030 1496 log.go:172] (0xc000928790) Reply frame received for 1\nI0520 23:51:02.882078 1496 log.go:172] (0xc000928790) (0xc000548f00) Create stream\nI0520 23:51:02.882093 1496 log.go:172] (0xc000928790) (0xc000548f00) Stream added, broadcasting: 3\nI0520 23:51:02.883394 1496 log.go:172] (0xc000928790) Reply frame received for 3\nI0520 23:51:02.883433 1496 log.go:172] (0xc000928790) (0xc0000dcf00) Create stream\nI0520 23:51:02.883442 1496 log.go:172] (0xc000928790) (0xc0000dcf00) Stream added, broadcasting: 5\nI0520 23:51:02.884733 1496 log.go:172] (0xc000928790) Reply frame received for 5\nI0520 23:51:02.949873 1496 log.go:172] 
(0xc000928790) Data frame received for 5\nI0520 23:51:02.949914 1496 log.go:172] (0xc0000dcf00) (5) Data frame handling\nI0520 23:51:02.949949 1496 log.go:172] (0xc0000dcf00) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.81.161:80/\nI0520 23:51:02.949991 1496 log.go:172] (0xc000928790) Data frame received for 3\nI0520 23:51:02.950005 1496 log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:02.950026 1496 log.go:172] (0xc000548f00) (3) Data frame sent\nI0520 23:51:02.953963 1496 log.go:172] (0xc000928790) Data frame received for 3\nI0520 23:51:02.954001 1496 log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:02.954032 1496 log.go:172] (0xc000548f00) (3) Data frame sent\nI0520 23:51:02.954578 1496 log.go:172] (0xc000928790) Data frame received for 5\nI0520 23:51:02.954633 1496 log.go:172] (0xc0000dcf00) (5) Data frame handling\nI0520 23:51:02.954744 1496 log.go:172] (0xc0000dcf00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.81.161:80/\nI0520 23:51:02.954781 1496 log.go:172] (0xc000928790) Data frame received for 3\nI0520 23:51:02.954799 1496 log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:02.954822 1496 log.go:172] (0xc000548f00) (3) Data frame sent\nI0520 23:51:02.958541 1496 log.go:172] (0xc000928790) Data frame received for 3\nI0520 23:51:02.958573 1496 log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:02.958603 1496 log.go:172] (0xc000548f00) (3) Data frame sent\nI0520 23:51:02.959401 1496 log.go:172] (0xc000928790) Data frame received for 3\nI0520 23:51:02.959432 1496 log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:02.959450 1496 log.go:172] (0xc000548f00) (3) Data frame sent\nI0520 23:51:02.959475 1496 log.go:172] (0xc000928790) Data frame received for 5\nI0520 23:51:02.959503 1496 log.go:172] (0xc0000dcf00) (5) Data frame handling\nI0520 23:51:02.959528 1496 log.go:172] (0xc0000dcf00) (5) Data frame sent\n+ echo\n+ 
curl -q -s --connect-timeout 2 http://10.102.81.161:80/\nI0520 23:51:02.966473 1496 log.go:172] (0xc000928790) Data frame received for 3\nI0520 23:51:02.966505 1496 log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:02.966542 1496 log.go:172] (0xc000548f00) (3) Data frame sent\nI0520 23:51:02.966793 1496 log.go:172] (0xc000928790) Data frame received for 3\nI0520 23:51:02.966811 1496 log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:02.966826 1496 log.go:172] (0xc000928790) Data frame received for 5\nI0520 23:51:02.966851 1496 log.go:172] (0xc0000dcf00) (5) Data frame handling\nI0520 23:51:02.966862 1496 log.go:172] (0xc0000dcf00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.81.161:80/\nI0520 23:51:02.966878 1496 log.go:172] (0xc000548f00) (3) Data frame sent\nI0520 23:51:02.972352 1496 log.go:172] (0xc000928790) Data frame received for 3\nI0520 23:51:02.972370 1496 log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:02.972381 1496 log.go:172] (0xc000548f00) (3) Data frame sent\nI0520 23:51:02.973576 1496 log.go:172] (0xc000928790) Data frame received for 5\nI0520 23:51:02.973608 1496 log.go:172] (0xc0000dcf00) (5) Data frame handling\nI0520 23:51:02.973634 1496 log.go:172] (0xc0000dcf00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.81.161:80/\nI0520 23:51:02.973817 1496 log.go:172] (0xc000928790) Data frame received for 3\nI0520 23:51:02.973838 1496 log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:02.973856 1496 log.go:172] (0xc000548f00) (3) Data frame sent\nI0520 23:51:02.978656 1496 log.go:172] (0xc000928790) Data frame received for 3\nI0520 23:51:02.978683 1496 log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:02.978705 1496 log.go:172] (0xc000548f00) (3) Data frame sent\nI0520 23:51:02.978891 1496 log.go:172] (0xc000928790) Data frame received for 3\nI0520 23:51:02.978915 1496 log.go:172] (0xc000928790) Data frame received for 
5\nI0520 23:51:02.978948 1496 log.go:172] (0xc0000dcf00) (5) Data frame handling\nI0520 23:51:02.978971 1496 log.go:172] (0xc0000dcf00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.81.161:80/\nI0520 23:51:02.978999 1496 log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:02.979020 1496 log.go:172] (0xc000548f00) (3) Data frame sent\nI0520 23:51:02.983470 1496 log.go:172] (0xc000928790) Data frame received for 3\nI0520 23:51:02.983495 1496 log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:02.983522 1496 log.go:172] (0xc000548f00) (3) Data frame sent\nI0520 23:51:02.983821 1496 log.go:172] (0xc000928790) Data frame received for 5\nI0520 23:51:02.983848 1496 log.go:172] (0xc0000dcf00) (5) Data frame handling\nI0520 23:51:02.983861 1496 log.go:172] (0xc0000dcf00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.81.161:80/\nI0520 23:51:02.983879 1496 log.go:172] (0xc000928790) Data frame received for 3\nI0520 23:51:02.983898 1496 log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:02.983930 1496 log.go:172] (0xc000548f00) (3) Data frame sent\nI0520 23:51:02.987857 1496 log.go:172] (0xc000928790) Data frame received for 3\nI0520 23:51:02.987877 1496 log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:02.987895 1496 log.go:172] (0xc000548f00) (3) Data frame sent\nI0520 23:51:02.988276 1496 log.go:172] (0xc000928790) Data frame received for 3\nI0520 23:51:02.988300 1496 log.go:172] (0xc000928790) Data frame received for 5\nI0520 23:51:02.988340 1496 log.go:172] (0xc0000dcf00) (5) Data frame handling\nI0520 23:51:02.988355 1496 log.go:172] (0xc0000dcf00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.81.161:80/\nI0520 23:51:02.988373 1496 log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:02.988386 1496 log.go:172] (0xc000548f00) (3) Data frame sent\nI0520 23:51:02.992400 1496 log.go:172] (0xc000928790) Data frame received for 
3\nI0520 23:51:02.992436 1496 log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:02.992468 1496 log.go:172] (0xc000548f00) (3) Data frame sent\nI0520 23:51:02.992819 1496 log.go:172] (0xc000928790) Data frame received for 3\nI0520 23:51:02.992840 1496 log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:02.992847 1496 log.go:172] (0xc000548f00) (3) Data frame sent\nI0520 23:51:02.992859 1496 log.go:172] (0xc000928790) Data frame received for 5\nI0520 23:51:02.992864 1496 log.go:172] (0xc0000dcf00) (5) Data frame handling\nI0520 23:51:02.992870 1496 log.go:172] (0xc0000dcf00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.81.161:80/\nI0520 23:51:02.997409 1496 log.go:172] (0xc000928790) Data frame received for 3\nI0520 23:51:02.997463 1496 log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:02.997500 1496 log.go:172] (0xc000548f00) (3) Data frame sent\nI0520 23:51:02.997876 1496 log.go:172] (0xc000928790) Data frame received for 3\nI0520 23:51:02.997890 1496 log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:02.997898 1496 log.go:172] (0xc000548f00) (3) Data frame sent\nI0520 23:51:02.997909 1496 log.go:172] (0xc000928790) Data frame received for 5\nI0520 23:51:02.997915 1496 log.go:172] (0xc0000dcf00) (5) Data frame handling\nI0520 23:51:02.997923 1496 log.go:172] (0xc0000dcf00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.81.161:80/\nI0520 23:51:03.001930 1496 log.go:172] (0xc000928790) Data frame received for 3\nI0520 23:51:03.001952 1496 log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:03.001976 1496 log.go:172] (0xc000548f00) (3) Data frame sent\nI0520 23:51:03.002429 1496 log.go:172] (0xc000928790) Data frame received for 3\nI0520 23:51:03.002444 1496 log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:03.002454 1496 log.go:172] (0xc000548f00) (3) Data frame sent\nI0520 23:51:03.002483 1496 log.go:172] (0xc000928790) Data 
frame received for 5\nI0520 23:51:03.002509 1496 log.go:172] (0xc0000dcf00) (5) Data frame handling\nI0520 23:51:03.002531 1496 log.go:172] (0xc0000dcf00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.81.161:80/\nI0520 23:51:03.006273 1496 log.go:172] (0xc000928790) Data frame received for 3\nI0520 23:51:03.006303 1496 log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:03.006333 1496 log.go:172] (0xc000548f00) (3) Data frame sent\nI0520 23:51:03.006675 1496 log.go:172] (0xc000928790) Data frame received for 5\nI0520 23:51:03.006722 1496 log.go:172] (0xc0000dcf00) (5) Data frame handling\nI0520 23:51:03.006738 1496 log.go:172] (0xc0000dcf00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.81.161:80/\nI0520 23:51:03.006757 1496 log.go:172] (0xc000928790) Data frame received for 3\nI0520 23:51:03.006779 1496 log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:03.006797 1496 log.go:172] (0xc000548f00) (3) Data frame sent\nI0520 23:51:03.011075 1496 log.go:172] (0xc000928790) Data frame received for 3\nI0520 23:51:03.011194 1496 log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:03.011240 1496 log.go:172] (0xc000548f00) (3) Data frame sent\nI0520 23:51:03.011572 1496 log.go:172] (0xc000928790) Data frame received for 3\nI0520 23:51:03.011587 1496 log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:03.011595 1496 log.go:172] (0xc000548f00) (3) Data frame sent\nI0520 23:51:03.011615 1496 log.go:172] (0xc000928790) Data frame received for 5\nI0520 23:51:03.011651 1496 log.go:172] (0xc0000dcf00) (5) Data frame handling\nI0520 23:51:03.011684 1496 log.go:172] (0xc0000dcf00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.81.161:80/\nI0520 23:51:03.015814 1496 log.go:172] (0xc000928790) Data frame received for 3\nI0520 23:51:03.015835 1496 log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:03.015856 1496 log.go:172] 
(0xc000548f00) (3) Data frame sent\nI0520 23:51:03.016239 1496 log.go:172] (0xc000928790) Data frame received for 3\nI0520 23:51:03.016280 1496 log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:03.016296 1496 log.go:172] (0xc000548f00) (3) Data frame sent\nI0520 23:51:03.016314 1496 log.go:172] (0xc000928790) Data frame received for 5\nI0520 23:51:03.016323 1496 log.go:172] (0xc0000dcf00) (5) Data frame handling\nI0520 23:51:03.016339 1496 log.go:172] (0xc0000dcf00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.81.161:80/\nI0520 23:51:03.020597 1496 log.go:172] (0xc000928790) Data frame received for 3\nI0520 23:51:03.020612 1496 log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:03.020620 1496 log.go:172] (0xc000548f00) (3) Data frame sent\nI0520 23:51:03.021570 1496 log.go:172] (0xc000928790) Data frame received for 5\nI0520 23:51:03.021589 1496 log.go:172] (0xc0000dcf00) (5) Data frame handling\nI0520 23:51:03.021604 1496 log.go:172] (0xc0000dcf00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.81.161:80/\nI0520 23:51:03.021824 1496 log.go:172] (0xc000928790) Data frame received for 3\nI0520 23:51:03.021836 1496 log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:03.021848 1496 log.go:172] (0xc000548f00) (3) Data frame sent\nI0520 23:51:03.026392 1496 log.go:172] (0xc000928790) Data frame received for 3\nI0520 23:51:03.026419 1496 log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:03.026441 1496 log.go:172] (0xc000548f00) (3) Data frame sent\nI0520 23:51:03.027036 1496 log.go:172] (0xc000928790) Data frame received for 3\nI0520 23:51:03.027070 1496 log.go:172] (0xc000928790) Data frame received for 5\nI0520 23:51:03.027105 1496 log.go:172] (0xc0000dcf00) (5) Data frame handling\nI0520 23:51:03.027123 1496 log.go:172] (0xc0000dcf00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.81.161:80/\nI0520 23:51:03.027149 1496 
log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:03.027160 1496 log.go:172] (0xc000548f00) (3) Data frame sent\nI0520 23:51:03.031389 1496 log.go:172] (0xc000928790) Data frame received for 3\nI0520 23:51:03.031421 1496 log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:03.031442 1496 log.go:172] (0xc000548f00) (3) Data frame sent\nI0520 23:51:03.032308 1496 log.go:172] (0xc000928790) Data frame received for 3\nI0520 23:51:03.032345 1496 log.go:172] (0xc000548f00) (3) Data frame handling\nI0520 23:51:03.032396 1496 log.go:172] (0xc000928790) Data frame received for 5\nI0520 23:51:03.032425 1496 log.go:172] (0xc0000dcf00) (5) Data frame handling\nI0520 23:51:03.034456 1496 log.go:172] (0xc000928790) Data frame received for 1\nI0520 23:51:03.034484 1496 log.go:172] (0xc00069f680) (1) Data frame handling\nI0520 23:51:03.034517 1496 log.go:172] (0xc00069f680) (1) Data frame sent\nI0520 23:51:03.034538 1496 log.go:172] (0xc000928790) (0xc00069f680) Stream removed, broadcasting: 1\nI0520 23:51:03.034556 1496 log.go:172] (0xc000928790) Go away received\nI0520 23:51:03.035001 1496 log.go:172] (0xc000928790) (0xc00069f680) Stream removed, broadcasting: 1\nI0520 23:51:03.035027 1496 log.go:172] (0xc000928790) (0xc000548f00) Stream removed, broadcasting: 3\nI0520 23:51:03.035042 1496 log.go:172] (0xc000928790) (0xc0000dcf00) Stream removed, broadcasting: 5\n" May 20 23:51:03.041: INFO: stdout: "\naffinity-clusterip-timeout-4snf8\naffinity-clusterip-timeout-4snf8\naffinity-clusterip-timeout-4snf8\naffinity-clusterip-timeout-4snf8\naffinity-clusterip-timeout-4snf8\naffinity-clusterip-timeout-4snf8\naffinity-clusterip-timeout-4snf8\naffinity-clusterip-timeout-4snf8\naffinity-clusterip-timeout-4snf8\naffinity-clusterip-timeout-4snf8\naffinity-clusterip-timeout-4snf8\naffinity-clusterip-timeout-4snf8\naffinity-clusterip-timeout-4snf8\naffinity-clusterip-timeout-4snf8\naffinity-clusterip-timeout-4snf8\naffinity-clusterip-timeout-4snf8" May 20 
23:51:03.041: INFO: Received response from host: May 20 23:51:03.041: INFO: Received response from host: affinity-clusterip-timeout-4snf8 May 20 23:51:03.041: INFO: Received response from host: affinity-clusterip-timeout-4snf8 May 20 23:51:03.041: INFO: Received response from host: affinity-clusterip-timeout-4snf8 May 20 23:51:03.041: INFO: Received response from host: affinity-clusterip-timeout-4snf8 May 20 23:51:03.041: INFO: Received response from host: affinity-clusterip-timeout-4snf8 May 20 23:51:03.041: INFO: Received response from host: affinity-clusterip-timeout-4snf8 May 20 23:51:03.041: INFO: Received response from host: affinity-clusterip-timeout-4snf8 May 20 23:51:03.041: INFO: Received response from host: affinity-clusterip-timeout-4snf8 May 20 23:51:03.041: INFO: Received response from host: affinity-clusterip-timeout-4snf8 May 20 23:51:03.041: INFO: Received response from host: affinity-clusterip-timeout-4snf8 May 20 23:51:03.041: INFO: Received response from host: affinity-clusterip-timeout-4snf8 May 20 23:51:03.041: INFO: Received response from host: affinity-clusterip-timeout-4snf8 May 20 23:51:03.041: INFO: Received response from host: affinity-clusterip-timeout-4snf8 May 20 23:51:03.041: INFO: Received response from host: affinity-clusterip-timeout-4snf8 May 20 23:51:03.041: INFO: Received response from host: affinity-clusterip-timeout-4snf8 May 20 23:51:03.041: INFO: Received response from host: affinity-clusterip-timeout-4snf8 May 20 23:51:03.041: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3650 execpod-affinityhpkng -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.102.81.161:80/' May 20 23:51:03.252: INFO: stderr: "I0520 23:51:03.164933 1517 log.go:172] (0xc00003b600) (0xc00070bb80) Create stream\nI0520 23:51:03.165012 1517 log.go:172] (0xc00003b600) (0xc00070bb80) Stream added, broadcasting: 1\nI0520 23:51:03.167610 1517 log.go:172] 
(0xc00003b600) Reply frame received for 1\nI0520 23:51:03.167665 1517 log.go:172] (0xc00003b600) (0xc000389d60) Create stream\nI0520 23:51:03.167681 1517 log.go:172] (0xc00003b600) (0xc000389d60) Stream added, broadcasting: 3\nI0520 23:51:03.168625 1517 log.go:172] (0xc00003b600) Reply frame received for 3\nI0520 23:51:03.168663 1517 log.go:172] (0xc00003b600) (0xc00073a5a0) Create stream\nI0520 23:51:03.168682 1517 log.go:172] (0xc00003b600) (0xc00073a5a0) Stream added, broadcasting: 5\nI0520 23:51:03.169819 1517 log.go:172] (0xc00003b600) Reply frame received for 5\nI0520 23:51:03.236384 1517 log.go:172] (0xc00003b600) Data frame received for 5\nI0520 23:51:03.236416 1517 log.go:172] (0xc00073a5a0) (5) Data frame handling\nI0520 23:51:03.236438 1517 log.go:172] (0xc00073a5a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.102.81.161:80/\nI0520 23:51:03.240828 1517 log.go:172] (0xc00003b600) Data frame received for 3\nI0520 23:51:03.240861 1517 log.go:172] (0xc000389d60) (3) Data frame handling\nI0520 23:51:03.240889 1517 log.go:172] (0xc000389d60) (3) Data frame sent\nI0520 23:51:03.242261 1517 log.go:172] (0xc00003b600) Data frame received for 3\nI0520 23:51:03.242278 1517 log.go:172] (0xc000389d60) (3) Data frame handling\nI0520 23:51:03.242483 1517 log.go:172] (0xc00003b600) Data frame received for 5\nI0520 23:51:03.242506 1517 log.go:172] (0xc00073a5a0) (5) Data frame handling\nI0520 23:51:03.243830 1517 log.go:172] (0xc00003b600) Data frame received for 1\nI0520 23:51:03.243862 1517 log.go:172] (0xc00070bb80) (1) Data frame handling\nI0520 23:51:03.243894 1517 log.go:172] (0xc00070bb80) (1) Data frame sent\nI0520 23:51:03.243916 1517 log.go:172] (0xc00003b600) (0xc00070bb80) Stream removed, broadcasting: 1\nI0520 23:51:03.244142 1517 log.go:172] (0xc00003b600) Go away received\nI0520 23:51:03.244498 1517 log.go:172] (0xc00003b600) (0xc00070bb80) Stream removed, broadcasting: 1\nI0520 23:51:03.244522 1517 log.go:172] (0xc00003b600) 
(0xc000389d60) Stream removed, broadcasting: 3\nI0520 23:51:03.244533 1517 log.go:172] (0xc00003b600) (0xc00073a5a0) Stream removed, broadcasting: 5\n" May 20 23:51:03.252: INFO: stdout: "affinity-clusterip-timeout-4snf8" May 20 23:51:18.252: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3650 execpod-affinityhpkng -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.102.81.161:80/' May 20 23:51:18.466: INFO: stderr: "I0520 23:51:18.378386 1537 log.go:172] (0xc000b751e0) (0xc00015dae0) Create stream\nI0520 23:51:18.378431 1537 log.go:172] (0xc000b751e0) (0xc00015dae0) Stream added, broadcasting: 1\nI0520 23:51:18.380201 1537 log.go:172] (0xc000b751e0) Reply frame received for 1\nI0520 23:51:18.380232 1537 log.go:172] (0xc000b751e0) (0xc00073cf00) Create stream\nI0520 23:51:18.380251 1537 log.go:172] (0xc000b751e0) (0xc00073cf00) Stream added, broadcasting: 3\nI0520 23:51:18.381044 1537 log.go:172] (0xc000b751e0) Reply frame received for 3\nI0520 23:51:18.381067 1537 log.go:172] (0xc000b751e0) (0xc000612460) Create stream\nI0520 23:51:18.381077 1537 log.go:172] (0xc000b751e0) (0xc000612460) Stream added, broadcasting: 5\nI0520 23:51:18.382121 1537 log.go:172] (0xc000b751e0) Reply frame received for 5\nI0520 23:51:18.456422 1537 log.go:172] (0xc000b751e0) Data frame received for 5\nI0520 23:51:18.456454 1537 log.go:172] (0xc000612460) (5) Data frame handling\nI0520 23:51:18.456478 1537 log.go:172] (0xc000612460) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.102.81.161:80/\nI0520 23:51:18.459590 1537 log.go:172] (0xc000b751e0) Data frame received for 3\nI0520 23:51:18.459616 1537 log.go:172] (0xc00073cf00) (3) Data frame handling\nI0520 23:51:18.459633 1537 log.go:172] (0xc00073cf00) (3) Data frame sent\nI0520 23:51:18.459816 1537 log.go:172] (0xc000b751e0) Data frame received for 5\nI0520 23:51:18.459831 1537 log.go:172] (0xc000612460) (5) Data frame 
handling\nI0520 23:51:18.460009 1537 log.go:172] (0xc000b751e0) Data frame received for 3\nI0520 23:51:18.460023 1537 log.go:172] (0xc00073cf00) (3) Data frame handling\nI0520 23:51:18.461743 1537 log.go:172] (0xc000b751e0) Data frame received for 1\nI0520 23:51:18.461761 1537 log.go:172] (0xc00015dae0) (1) Data frame handling\nI0520 23:51:18.461770 1537 log.go:172] (0xc00015dae0) (1) Data frame sent\nI0520 23:51:18.461787 1537 log.go:172] (0xc000b751e0) (0xc00015dae0) Stream removed, broadcasting: 1\nI0520 23:51:18.461806 1537 log.go:172] (0xc000b751e0) Go away received\nI0520 23:51:18.462237 1537 log.go:172] (0xc000b751e0) (0xc00015dae0) Stream removed, broadcasting: 1\nI0520 23:51:18.462255 1537 log.go:172] (0xc000b751e0) (0xc00073cf00) Stream removed, broadcasting: 3\nI0520 23:51:18.462264 1537 log.go:172] (0xc000b751e0) (0xc000612460) Stream removed, broadcasting: 5\n" May 20 23:51:18.466: INFO: stdout: "affinity-clusterip-timeout-4snf8" May 20 23:51:33.466: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3650 execpod-affinityhpkng -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.102.81.161:80/' May 20 23:51:33.698: INFO: stderr: "I0520 23:51:33.594572 1558 log.go:172] (0xc000c16000) (0xc000630e60) Create stream\nI0520 23:51:33.594654 1558 log.go:172] (0xc000c16000) (0xc000630e60) Stream added, broadcasting: 1\nI0520 23:51:33.596276 1558 log.go:172] (0xc000c16000) Reply frame received for 1\nI0520 23:51:33.596321 1558 log.go:172] (0xc000c16000) (0xc0004eea00) Create stream\nI0520 23:51:33.596332 1558 log.go:172] (0xc000c16000) (0xc0004eea00) Stream added, broadcasting: 3\nI0520 23:51:33.597420 1558 log.go:172] (0xc000c16000) Reply frame received for 3\nI0520 23:51:33.597469 1558 log.go:172] (0xc000c16000) (0xc0003ac8c0) Create stream\nI0520 23:51:33.597478 1558 log.go:172] (0xc000c16000) (0xc0003ac8c0) Stream added, broadcasting: 5\nI0520 23:51:33.598185 1558 
log.go:172] (0xc000c16000) Reply frame received for 5\nI0520 23:51:33.683302 1558 log.go:172] (0xc000c16000) Data frame received for 5\nI0520 23:51:33.683331 1558 log.go:172] (0xc0003ac8c0) (5) Data frame handling\nI0520 23:51:33.683352 1558 log.go:172] (0xc0003ac8c0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.102.81.161:80/\nI0520 23:51:33.687372 1558 log.go:172] (0xc000c16000) Data frame received for 3\nI0520 23:51:33.687405 1558 log.go:172] (0xc0004eea00) (3) Data frame handling\nI0520 23:51:33.687429 1558 log.go:172] (0xc0004eea00) (3) Data frame sent\nI0520 23:51:33.687811 1558 log.go:172] (0xc000c16000) Data frame received for 5\nI0520 23:51:33.687837 1558 log.go:172] (0xc0003ac8c0) (5) Data frame handling\nI0520 23:51:33.687864 1558 log.go:172] (0xc000c16000) Data frame received for 3\nI0520 23:51:33.687889 1558 log.go:172] (0xc0004eea00) (3) Data frame handling\nI0520 23:51:33.689870 1558 log.go:172] (0xc000c16000) Data frame received for 1\nI0520 23:51:33.689897 1558 log.go:172] (0xc000630e60) (1) Data frame handling\nI0520 23:51:33.689924 1558 log.go:172] (0xc000630e60) (1) Data frame sent\nI0520 23:51:33.689947 1558 log.go:172] (0xc000c16000) (0xc000630e60) Stream removed, broadcasting: 1\nI0520 23:51:33.689974 1558 log.go:172] (0xc000c16000) Go away received\nI0520 23:51:33.690386 1558 log.go:172] (0xc000c16000) (0xc000630e60) Stream removed, broadcasting: 1\nI0520 23:51:33.690414 1558 log.go:172] (0xc000c16000) (0xc0004eea00) Stream removed, broadcasting: 3\nI0520 23:51:33.690431 1558 log.go:172] (0xc000c16000) (0xc0003ac8c0) Stream removed, broadcasting: 5\n" May 20 23:51:33.698: INFO: stdout: "affinity-clusterip-timeout-r2nzp" May 20 23:51:33.699: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-3650, will wait for the garbage collector to delete the pods May 20 23:51:33.902: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 6.574605ms May 20 
23:51:34.403: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 500.224324ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:51:44.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3650" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:62.954 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":57,"skipped":1032,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:51:44.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a 
Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:51:50.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4416" for this suite. • [SLOW TEST:5.237 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":288,"completed":58,"skipped":1051,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:51:50.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 20 23:51:50.278: INFO: namespace kubectl-3007 May 20 23:51:50.278: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3007' May 20 23:51:50.574: INFO: stderr: "" May 20 23:51:50.574: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 20 23:51:51.578: INFO: Selector matched 1 pods for map[app:agnhost] May 20 23:51:51.578: INFO: Found 0 / 1 May 20 23:51:52.579: INFO: Selector matched 1 pods for map[app:agnhost] May 20 23:51:52.579: INFO: Found 0 / 1 May 20 23:51:53.590: INFO: Selector matched 1 pods for map[app:agnhost] May 20 23:51:53.590: INFO: Found 1 / 1 May 20 23:51:53.590: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 20 23:51:53.601: INFO: Selector matched 1 pods for map[app:agnhost] May 20 23:51:53.601: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 20 23:51:53.601: INFO: wait on agnhost-master startup in kubectl-3007 May 20 23:51:53.602: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs agnhost-master-4hxc5 agnhost-master --namespace=kubectl-3007' May 20 23:51:53.724: INFO: stderr: "" May 20 23:51:53.724: INFO: stdout: "Paused\n" STEP: exposing RC May 20 23:51:53.724: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3007' May 20 23:51:53.860: INFO: stderr: "" May 20 23:51:53.860: INFO: stdout: "service/rm2 exposed\n" May 20 23:51:53.871: INFO: Service rm2 in namespace kubectl-3007 found. STEP: exposing service May 20 23:51:55.879: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3007' May 20 23:51:56.005: INFO: stderr: "" May 20 23:51:56.005: INFO: stdout: "service/rm3 exposed\n" May 20 23:51:56.015: INFO: Service rm3 in namespace kubectl-3007 found. 
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:51:58.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3007" for this suite. • [SLOW TEST:7.818 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1224 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":288,"completed":59,"skipped":1054,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:51:58.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-f7vr STEP: Creating a pod to test atomic-volume-subpath May 20 23:51:58.158: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-f7vr" in 
namespace "subpath-6975" to be "Succeeded or Failed" May 20 23:51:58.174: INFO: Pod "pod-subpath-test-configmap-f7vr": Phase="Pending", Reason="", readiness=false. Elapsed: 16.163968ms May 20 23:52:00.192: INFO: Pod "pod-subpath-test-configmap-f7vr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033884111s May 20 23:52:02.197: INFO: Pod "pod-subpath-test-configmap-f7vr": Phase="Running", Reason="", readiness=true. Elapsed: 4.038829992s May 20 23:52:04.225: INFO: Pod "pod-subpath-test-configmap-f7vr": Phase="Running", Reason="", readiness=true. Elapsed: 6.066970335s May 20 23:52:06.230: INFO: Pod "pod-subpath-test-configmap-f7vr": Phase="Running", Reason="", readiness=true. Elapsed: 8.071962121s May 20 23:52:08.235: INFO: Pod "pod-subpath-test-configmap-f7vr": Phase="Running", Reason="", readiness=true. Elapsed: 10.076819622s May 20 23:52:10.239: INFO: Pod "pod-subpath-test-configmap-f7vr": Phase="Running", Reason="", readiness=true. Elapsed: 12.081185704s May 20 23:52:12.243: INFO: Pod "pod-subpath-test-configmap-f7vr": Phase="Running", Reason="", readiness=true. Elapsed: 14.085148208s May 20 23:52:14.248: INFO: Pod "pod-subpath-test-configmap-f7vr": Phase="Running", Reason="", readiness=true. Elapsed: 16.089907321s May 20 23:52:16.253: INFO: Pod "pod-subpath-test-configmap-f7vr": Phase="Running", Reason="", readiness=true. Elapsed: 18.094783018s May 20 23:52:18.258: INFO: Pod "pod-subpath-test-configmap-f7vr": Phase="Running", Reason="", readiness=true. Elapsed: 20.0993531s May 20 23:52:20.273: INFO: Pod "pod-subpath-test-configmap-f7vr": Phase="Running", Reason="", readiness=true. Elapsed: 22.114684234s May 20 23:52:22.277: INFO: Pod "pod-subpath-test-configmap-f7vr": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.118701504s STEP: Saw pod success May 20 23:52:22.277: INFO: Pod "pod-subpath-test-configmap-f7vr" satisfied condition "Succeeded or Failed" May 20 23:52:22.280: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-f7vr container test-container-subpath-configmap-f7vr: STEP: delete the pod May 20 23:52:22.365: INFO: Waiting for pod pod-subpath-test-configmap-f7vr to disappear May 20 23:52:22.368: INFO: Pod pod-subpath-test-configmap-f7vr no longer exists STEP: Deleting pod pod-subpath-test-configmap-f7vr May 20 23:52:22.368: INFO: Deleting pod "pod-subpath-test-configmap-f7vr" in namespace "subpath-6975" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:52:22.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6975" for this suite. • [SLOW TEST:24.347 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":288,"completed":60,"skipped":1071,"failed":0} SSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:52:22.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 20 23:52:27.466: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:52:27.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-431" for this suite. 
• [SLOW TEST:5.140 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":61,"skipped":1074,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:52:27.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering 
metrics W0520 23:52:28.393786 8 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 20 23:52:28.393: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:52:28.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8465" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":288,"completed":62,"skipped":1084,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:52:28.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 20 23:52:36.538: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 20 23:52:36.558: INFO: Pod pod-with-poststart-http-hook still exists May 20 23:52:38.559: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 20 23:52:38.563: INFO: Pod pod-with-poststart-http-hook still exists May 20 23:52:40.559: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 20 23:52:40.562: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:52:40.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1903" for this suite. 
• [SLOW TEST:12.169 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":288,"completed":63,"skipped":1099,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:52:40.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-3393/configmap-test-5297f956-a311-4c79-9b14-d7d313d76a04 STEP: Creating a pod to test consume configMaps May 20 23:52:40.693: INFO: Waiting up to 5m0s for pod "pod-configmaps-7db6c527-e7c9-4a22-8e5e-258fe37f07dd" in namespace "configmap-3393" to be "Succeeded or Failed" May 20 23:52:40.696: INFO: Pod "pod-configmaps-7db6c527-e7c9-4a22-8e5e-258fe37f07dd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.086355ms May 20 23:52:42.722: INFO: Pod "pod-configmaps-7db6c527-e7c9-4a22-8e5e-258fe37f07dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029339241s May 20 23:52:44.726: INFO: Pod "pod-configmaps-7db6c527-e7c9-4a22-8e5e-258fe37f07dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033267336s STEP: Saw pod success May 20 23:52:44.726: INFO: Pod "pod-configmaps-7db6c527-e7c9-4a22-8e5e-258fe37f07dd" satisfied condition "Succeeded or Failed" May 20 23:52:44.729: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-7db6c527-e7c9-4a22-8e5e-258fe37f07dd container env-test: STEP: delete the pod May 20 23:52:44.787: INFO: Waiting for pod pod-configmaps-7db6c527-e7c9-4a22-8e5e-258fe37f07dd to disappear May 20 23:52:44.801: INFO: Pod pod-configmaps-7db6c527-e7c9-4a22-8e5e-258fe37f07dd no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:52:44.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3393" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":288,"completed":64,"skipped":1135,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:52:44.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:52:44.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-1600" for this suite. 
•
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":288,"completed":65,"skipped":1141,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 23:52:44.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 20 23:52:44.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-6167
I0520 23:52:44.976885 8 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-6167, replica count: 1
I0520 23:52:46.027246 8 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0520 23:52:47.027493 8 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0520 23:52:48.027807 8 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0520 23:52:49.028039 8 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 20
23:52:49.190: INFO: Created: latency-svc-rz92w May 20 23:52:49.198: INFO: Got endpoints: latency-svc-rz92w [69.770568ms] May 20 23:52:49.243: INFO: Created: latency-svc-nskkk May 20 23:52:49.260: INFO: Got endpoints: latency-svc-nskkk [62.324944ms] May 20 23:52:49.279: INFO: Created: latency-svc-69jg4 May 20 23:52:49.328: INFO: Got endpoints: latency-svc-69jg4 [130.46636ms] May 20 23:52:49.355: INFO: Created: latency-svc-mnkmn May 20 23:52:49.526: INFO: Got endpoints: latency-svc-mnkmn [328.162273ms] May 20 23:52:49.532: INFO: Created: latency-svc-dbdts May 20 23:52:49.554: INFO: Got endpoints: latency-svc-dbdts [356.248851ms] May 20 23:52:49.586: INFO: Created: latency-svc-28xdk May 20 23:52:49.616: INFO: Got endpoints: latency-svc-28xdk [418.482125ms] May 20 23:52:49.687: INFO: Created: latency-svc-xgqxj May 20 23:52:49.699: INFO: Got endpoints: latency-svc-xgqxj [500.948843ms] May 20 23:52:49.757: INFO: Created: latency-svc-w9pj8 May 20 23:52:49.813: INFO: Got endpoints: latency-svc-w9pj8 [615.075997ms] May 20 23:52:49.851: INFO: Created: latency-svc-km5ql May 20 23:52:49.868: INFO: Got endpoints: latency-svc-km5ql [670.460554ms] May 20 23:52:49.889: INFO: Created: latency-svc-t42k9 May 20 23:52:49.903: INFO: Got endpoints: latency-svc-t42k9 [705.139468ms] May 20 23:52:49.973: INFO: Created: latency-svc-fsfbm May 20 23:52:49.982: INFO: Got endpoints: latency-svc-fsfbm [783.620462ms] May 20 23:52:50.011: INFO: Created: latency-svc-j9dw6 May 20 23:52:50.027: INFO: Got endpoints: latency-svc-j9dw6 [828.746707ms] May 20 23:52:50.065: INFO: Created: latency-svc-l5dls May 20 23:52:50.118: INFO: Got endpoints: latency-svc-l5dls [919.758195ms] May 20 23:52:50.147: INFO: Created: latency-svc-22mmt May 20 23:52:50.165: INFO: Got endpoints: latency-svc-22mmt [967.021237ms] May 20 23:52:50.256: INFO: Created: latency-svc-mgpqd May 20 23:52:50.261: INFO: Got endpoints: latency-svc-mgpqd [1.062946673s] May 20 23:52:50.281: INFO: Created: latency-svc-9ghrq May 20 23:52:50.291: 
INFO: Got endpoints: latency-svc-9ghrq [1.093290153s] May 20 23:52:50.339: INFO: Created: latency-svc-4bpwr May 20 23:52:50.351: INFO: Got endpoints: latency-svc-4bpwr [1.091187778s] May 20 23:52:50.425: INFO: Created: latency-svc-frr46 May 20 23:52:50.450: INFO: Got endpoints: latency-svc-frr46 [1.121054556s] May 20 23:52:50.487: INFO: Created: latency-svc-x6gll May 20 23:52:50.543: INFO: Got endpoints: latency-svc-x6gll [1.016355938s] May 20 23:52:50.596: INFO: Created: latency-svc-6blbd May 20 23:52:50.619: INFO: Got endpoints: latency-svc-6blbd [1.06507603s] May 20 23:52:50.716: INFO: Created: latency-svc-2czrl May 20 23:52:50.725: INFO: Got endpoints: latency-svc-2czrl [1.108564672s] May 20 23:52:50.749: INFO: Created: latency-svc-ktfvh May 20 23:52:50.764: INFO: Got endpoints: latency-svc-ktfvh [1.065205044s] May 20 23:52:50.854: INFO: Created: latency-svc-bxcgz May 20 23:52:50.866: INFO: Got endpoints: latency-svc-bxcgz [1.053047312s] May 20 23:52:50.893: INFO: Created: latency-svc-2l266 May 20 23:52:50.902: INFO: Got endpoints: latency-svc-2l266 [1.033676757s] May 20 23:52:50.929: INFO: Created: latency-svc-dzt5d May 20 23:52:50.945: INFO: Got endpoints: latency-svc-dzt5d [1.042045072s] May 20 23:52:51.017: INFO: Created: latency-svc-ks88l May 20 23:52:51.047: INFO: Got endpoints: latency-svc-ks88l [1.064953286s] May 20 23:52:51.083: INFO: Created: latency-svc-2h26w May 20 23:52:51.095: INFO: Got endpoints: latency-svc-2h26w [1.068033337s] May 20 23:52:51.159: INFO: Created: latency-svc-2tvkr May 20 23:52:51.167: INFO: Got endpoints: latency-svc-2tvkr [1.049482662s] May 20 23:52:51.239: INFO: Created: latency-svc-2mxjm May 20 23:52:51.251: INFO: Got endpoints: latency-svc-2mxjm [1.086268026s] May 20 23:52:51.320: INFO: Created: latency-svc-s279c May 20 23:52:51.353: INFO: Got endpoints: latency-svc-s279c [1.091867961s] May 20 23:52:51.388: INFO: Created: latency-svc-fh9d2 May 20 23:52:51.460: INFO: Got endpoints: latency-svc-fh9d2 [1.168422021s] May 20 
23:52:51.487: INFO: Created: latency-svc-stf8l May 20 23:52:51.504: INFO: Got endpoints: latency-svc-stf8l [1.152716843s] May 20 23:52:51.621: INFO: Created: latency-svc-lzmhl May 20 23:52:51.642: INFO: Got endpoints: latency-svc-lzmhl [1.192796101s] May 20 23:52:51.800: INFO: Created: latency-svc-d9682 May 20 23:52:51.830: INFO: Created: latency-svc-stm2z May 20 23:52:51.830: INFO: Got endpoints: latency-svc-d9682 [1.287083755s] May 20 23:52:51.866: INFO: Got endpoints: latency-svc-stm2z [1.246380764s] May 20 23:52:51.956: INFO: Created: latency-svc-ppt8n May 20 23:52:51.962: INFO: Got endpoints: latency-svc-ppt8n [1.237223723s] May 20 23:52:51.999: INFO: Created: latency-svc-rvrhr May 20 23:52:52.008: INFO: Got endpoints: latency-svc-rvrhr [1.244087926s] May 20 23:52:52.124: INFO: Created: latency-svc-8p2pg May 20 23:52:52.151: INFO: Got endpoints: latency-svc-8p2pg [1.284996966s] May 20 23:52:52.181: INFO: Created: latency-svc-k5s86 May 20 23:52:52.195: INFO: Got endpoints: latency-svc-k5s86 [1.293063588s] May 20 23:52:52.291: INFO: Created: latency-svc-2hl7s May 20 23:52:52.303: INFO: Got endpoints: latency-svc-2hl7s [1.358040858s] May 20 23:52:52.355: INFO: Created: latency-svc-p84ql May 20 23:52:52.379: INFO: Got endpoints: latency-svc-p84ql [1.332494477s] May 20 23:52:52.436: INFO: Created: latency-svc-fxrg2 May 20 23:52:52.448: INFO: Got endpoints: latency-svc-fxrg2 [1.352763266s] May 20 23:52:52.477: INFO: Created: latency-svc-5mmr8 May 20 23:52:52.490: INFO: Got endpoints: latency-svc-5mmr8 [1.322686786s] May 20 23:52:52.585: INFO: Created: latency-svc-fjcq4 May 20 23:52:52.613: INFO: Got endpoints: latency-svc-fjcq4 [1.36170498s] May 20 23:52:52.643: INFO: Created: latency-svc-4rhhm May 20 23:52:52.658: INFO: Got endpoints: latency-svc-4rhhm [1.304540541s] May 20 23:52:52.749: INFO: Created: latency-svc-glwmf May 20 23:52:52.783: INFO: Created: latency-svc-7ms5p May 20 23:52:52.783: INFO: Got endpoints: latency-svc-glwmf [1.323020203s] May 20 
23:52:52.797: INFO: Got endpoints: latency-svc-7ms5p [1.293194516s] May 20 23:52:52.841: INFO: Created: latency-svc-xnzn8 May 20 23:52:52.887: INFO: Got endpoints: latency-svc-xnzn8 [1.244507307s] May 20 23:52:52.929: INFO: Created: latency-svc-b5szr May 20 23:52:52.942: INFO: Got endpoints: latency-svc-b5szr [1.112351118s] May 20 23:52:53.042: INFO: Created: latency-svc-gv66x May 20 23:52:53.045: INFO: Got endpoints: latency-svc-gv66x [1.179187858s] May 20 23:52:53.083: INFO: Created: latency-svc-9d57h May 20 23:52:53.098: INFO: Got endpoints: latency-svc-9d57h [1.136117172s] May 20 23:52:53.213: INFO: Created: latency-svc-wmfp7 May 20 23:52:53.255: INFO: Created: latency-svc-7cxg8 May 20 23:52:53.255: INFO: Got endpoints: latency-svc-wmfp7 [1.247148557s] May 20 23:52:53.261: INFO: Got endpoints: latency-svc-7cxg8 [1.109914249s] May 20 23:52:53.288: INFO: Created: latency-svc-5jbwv May 20 23:52:53.311: INFO: Got endpoints: latency-svc-5jbwv [1.116053434s] May 20 23:52:53.375: INFO: Created: latency-svc-nxqcc May 20 23:52:53.386: INFO: Got endpoints: latency-svc-nxqcc [1.083311447s] May 20 23:52:53.441: INFO: Created: latency-svc-rfshd May 20 23:52:53.503: INFO: Got endpoints: latency-svc-rfshd [1.12420197s] May 20 23:52:53.573: INFO: Created: latency-svc-cdt7c May 20 23:52:53.582: INFO: Got endpoints: latency-svc-cdt7c [1.134275173s] May 20 23:52:53.621: INFO: Created: latency-svc-ppmgk May 20 23:52:53.633: INFO: Got endpoints: latency-svc-ppmgk [1.143047237s] May 20 23:52:53.707: INFO: Created: latency-svc-29jg9 May 20 23:52:53.783: INFO: Got endpoints: latency-svc-29jg9 [1.169376544s] May 20 23:52:53.795: INFO: Created: latency-svc-brsw9 May 20 23:52:53.846: INFO: Got endpoints: latency-svc-brsw9 [1.188043227s] May 20 23:52:53.934: INFO: Created: latency-svc-tvmbf May 20 23:52:53.981: INFO: Created: latency-svc-5sk6n May 20 23:52:53.982: INFO: Got endpoints: latency-svc-tvmbf [1.199185582s] May 20 23:52:53.993: INFO: Got endpoints: latency-svc-5sk6n 
[1.195877038s] May 20 23:52:54.089: INFO: Created: latency-svc-6stsb May 20 23:52:54.096: INFO: Got endpoints: latency-svc-6stsb [1.209222161s] May 20 23:52:54.119: INFO: Created: latency-svc-vd4lh May 20 23:52:54.156: INFO: Got endpoints: latency-svc-vd4lh [1.213664302s] May 20 23:52:54.243: INFO: Created: latency-svc-t7q4m May 20 23:52:54.250: INFO: Got endpoints: latency-svc-t7q4m [1.20499804s] May 20 23:52:54.284: INFO: Created: latency-svc-pm6vh May 20 23:52:54.302: INFO: Got endpoints: latency-svc-pm6vh [1.203287599s] May 20 23:52:54.322: INFO: Created: latency-svc-6jdjj May 20 23:52:54.393: INFO: Got endpoints: latency-svc-6jdjj [1.13801422s] May 20 23:52:54.410: INFO: Created: latency-svc-n64hz May 20 23:52:54.429: INFO: Got endpoints: latency-svc-n64hz [1.16797225s] May 20 23:52:54.451: INFO: Created: latency-svc-jlgwv May 20 23:52:54.464: INFO: Got endpoints: latency-svc-jlgwv [1.152354673s] May 20 23:52:54.491: INFO: Created: latency-svc-qplbl May 20 23:52:54.549: INFO: Got endpoints: latency-svc-qplbl [1.162572805s] May 20 23:52:54.554: INFO: Created: latency-svc-85nmr May 20 23:52:54.563: INFO: Got endpoints: latency-svc-85nmr [1.059945639s] May 20 23:52:54.588: INFO: Created: latency-svc-dsrdg May 20 23:52:54.600: INFO: Got endpoints: latency-svc-dsrdg [1.018500491s] May 20 23:52:54.625: INFO: Created: latency-svc-jft44 May 20 23:52:54.644: INFO: Got endpoints: latency-svc-jft44 [1.010275038s] May 20 23:52:54.717: INFO: Created: latency-svc-gpp77 May 20 23:52:54.726: INFO: Got endpoints: latency-svc-gpp77 [943.843444ms] May 20 23:52:54.770: INFO: Created: latency-svc-ppmvl May 20 23:52:54.806: INFO: Got endpoints: latency-svc-ppmvl [960.592333ms] May 20 23:52:54.875: INFO: Created: latency-svc-x2zn4 May 20 23:52:54.889: INFO: Got endpoints: latency-svc-x2zn4 [907.344096ms] May 20 23:52:54.911: INFO: Created: latency-svc-sw2rj May 20 23:52:54.926: INFO: Got endpoints: latency-svc-sw2rj [932.489276ms] May 20 23:52:54.948: INFO: Created: 
latency-svc-d87nv May 20 23:52:54.998: INFO: Got endpoints: latency-svc-d87nv [901.314511ms] May 20 23:52:55.015: INFO: Created: latency-svc-t2p5k May 20 23:52:55.035: INFO: Got endpoints: latency-svc-t2p5k [878.481717ms] May 20 23:52:55.063: INFO: Created: latency-svc-7tll8 May 20 23:52:55.082: INFO: Got endpoints: latency-svc-7tll8 [831.95904ms] May 20 23:52:55.147: INFO: Created: latency-svc-f2krv May 20 23:52:55.154: INFO: Got endpoints: latency-svc-f2krv [852.385029ms] May 20 23:52:55.175: INFO: Created: latency-svc-kjxhm May 20 23:52:55.209: INFO: Got endpoints: latency-svc-kjxhm [815.636564ms] May 20 23:52:55.299: INFO: Created: latency-svc-bln5p May 20 23:52:55.301: INFO: Got endpoints: latency-svc-bln5p [871.457999ms] May 20 23:52:55.350: INFO: Created: latency-svc-kjmps May 20 23:52:55.372: INFO: Got endpoints: latency-svc-kjmps [907.900602ms] May 20 23:52:55.391: INFO: Created: latency-svc-wc69q May 20 23:52:55.460: INFO: Got endpoints: latency-svc-wc69q [911.219437ms] May 20 23:52:55.469: INFO: Created: latency-svc-l2m9r May 20 23:52:55.490: INFO: Got endpoints: latency-svc-l2m9r [926.536781ms] May 20 23:52:55.523: INFO: Created: latency-svc-29q2m May 20 23:52:55.540: INFO: Got endpoints: latency-svc-29q2m [940.049086ms] May 20 23:52:55.620: INFO: Created: latency-svc-8npcl May 20 23:52:55.630: INFO: Got endpoints: latency-svc-8npcl [986.237677ms] May 20 23:52:55.676: INFO: Created: latency-svc-zk9wm May 20 23:52:55.685: INFO: Got endpoints: latency-svc-zk9wm [958.466472ms] May 20 23:52:55.709: INFO: Created: latency-svc-gltc4 May 20 23:52:55.812: INFO: Got endpoints: latency-svc-gltc4 [1.005765189s] May 20 23:52:55.920: INFO: Created: latency-svc-b4gx7 May 20 23:52:55.996: INFO: Got endpoints: latency-svc-b4gx7 [1.106971143s] May 20 23:52:56.002: INFO: Created: latency-svc-przdg May 20 23:52:56.027: INFO: Got endpoints: latency-svc-przdg [1.101476344s] May 20 23:52:56.106: INFO: Created: latency-svc-wcz2z May 20 23:52:56.109: INFO: Got endpoints: 
latency-svc-wcz2z [1.111225853s] May 20 23:52:56.153: INFO: Created: latency-svc-w5tqf May 20 23:52:56.165: INFO: Got endpoints: latency-svc-w5tqf [1.130524664s] May 20 23:52:56.190: INFO: Created: latency-svc-jxlkq May 20 23:52:56.244: INFO: Got endpoints: latency-svc-jxlkq [1.161961109s] May 20 23:52:56.257: INFO: Created: latency-svc-7xc59 May 20 23:52:56.274: INFO: Got endpoints: latency-svc-7xc59 [1.119553811s] May 20 23:52:56.299: INFO: Created: latency-svc-mmmdr May 20 23:52:56.318: INFO: Got endpoints: latency-svc-mmmdr [1.109020297s] May 20 23:52:56.399: INFO: Created: latency-svc-q8k96 May 20 23:52:56.403: INFO: Got endpoints: latency-svc-q8k96 [1.102040724s] May 20 23:52:56.429: INFO: Created: latency-svc-lj9wg May 20 23:52:56.443: INFO: Got endpoints: latency-svc-lj9wg [1.070999466s] May 20 23:52:56.461: INFO: Created: latency-svc-x7zmt May 20 23:52:56.480: INFO: Got endpoints: latency-svc-x7zmt [1.019778222s] May 20 23:52:56.555: INFO: Created: latency-svc-8tvjj May 20 23:52:56.596: INFO: Got endpoints: latency-svc-8tvjj [1.106300169s] May 20 23:52:56.645: INFO: Created: latency-svc-4pzc8 May 20 23:52:56.734: INFO: Got endpoints: latency-svc-4pzc8 [1.193747877s] May 20 23:52:56.737: INFO: Created: latency-svc-m98jm May 20 23:52:56.744: INFO: Got endpoints: latency-svc-m98jm [1.113679726s] May 20 23:52:56.767: INFO: Created: latency-svc-dp2dz May 20 23:52:56.797: INFO: Got endpoints: latency-svc-dp2dz [1.112341833s] May 20 23:52:56.827: INFO: Created: latency-svc-gx8kw May 20 23:52:56.903: INFO: Got endpoints: latency-svc-gx8kw [1.090677559s] May 20 23:52:56.904: INFO: Created: latency-svc-4gd6t May 20 23:52:56.917: INFO: Got endpoints: latency-svc-4gd6t [920.490209ms] May 20 23:52:56.963: INFO: Created: latency-svc-x5cdg May 20 23:52:56.976: INFO: Got endpoints: latency-svc-x5cdg [949.114334ms] May 20 23:52:56.999: INFO: Created: latency-svc-9tqzc May 20 23:52:57.063: INFO: Got endpoints: latency-svc-9tqzc [954.345698ms] May 20 23:52:57.066: INFO: 
Created: latency-svc-77p5t May 20 23:52:57.079: INFO: Got endpoints: latency-svc-77p5t [913.443433ms] May 20 23:52:57.103: INFO: Created: latency-svc-v6xlp May 20 23:52:57.121: INFO: Got endpoints: latency-svc-v6xlp [877.097667ms] May 20 23:52:57.151: INFO: Created: latency-svc-szpcl May 20 23:52:57.220: INFO: Got endpoints: latency-svc-szpcl [946.363877ms] May 20 23:52:57.251: INFO: Created: latency-svc-hpr7t May 20 23:52:57.265: INFO: Got endpoints: latency-svc-hpr7t [947.350538ms] May 20 23:52:57.314: INFO: Created: latency-svc-fbklg May 20 23:52:57.399: INFO: Got endpoints: latency-svc-fbklg [996.091555ms] May 20 23:52:57.402: INFO: Created: latency-svc-4cgs4 May 20 23:52:57.416: INFO: Got endpoints: latency-svc-4cgs4 [973.355927ms] May 20 23:52:57.440: INFO: Created: latency-svc-6gbls May 20 23:52:57.464: INFO: Got endpoints: latency-svc-6gbls [983.616755ms] May 20 23:52:57.494: INFO: Created: latency-svc-b4hcm May 20 23:52:57.560: INFO: Got endpoints: latency-svc-b4hcm [964.125517ms] May 20 23:52:57.562: INFO: Created: latency-svc-jqcv4 May 20 23:52:57.572: INFO: Got endpoints: latency-svc-jqcv4 [838.021732ms] May 20 23:52:57.593: INFO: Created: latency-svc-2hl57 May 20 23:52:57.609: INFO: Got endpoints: latency-svc-2hl57 [865.408413ms] May 20 23:52:57.632: INFO: Created: latency-svc-657rr May 20 23:52:57.740: INFO: Got endpoints: latency-svc-657rr [943.103934ms] May 20 23:52:57.743: INFO: Created: latency-svc-g6g57 May 20 23:52:57.766: INFO: Got endpoints: latency-svc-g6g57 [862.971661ms] May 20 23:52:57.786: INFO: Created: latency-svc-6l9kq May 20 23:52:57.920: INFO: Got endpoints: latency-svc-6l9kq [1.003319859s] May 20 23:52:57.922: INFO: Created: latency-svc-rgxxk May 20 23:52:57.934: INFO: Got endpoints: latency-svc-rgxxk [957.162189ms] May 20 23:52:57.966: INFO: Created: latency-svc-tl98d May 20 23:52:57.982: INFO: Got endpoints: latency-svc-tl98d [918.971503ms] May 20 23:52:58.013: INFO: Created: latency-svc-225rc May 20 23:52:58.111: INFO: Got 
endpoints: latency-svc-225rc [1.032495919s] May 20 23:52:58.124: INFO: Created: latency-svc-t8987 May 20 23:52:58.132: INFO: Got endpoints: latency-svc-t8987 [1.01069743s] May 20 23:52:58.170: INFO: Created: latency-svc-xgsgr May 20 23:52:58.199: INFO: Got endpoints: latency-svc-xgsgr [979.101623ms] May 20 23:52:58.279: INFO: Created: latency-svc-wwtjc May 20 23:52:58.309: INFO: Got endpoints: latency-svc-wwtjc [1.043964227s] May 20 23:52:58.310: INFO: Created: latency-svc-vt5l6 May 20 23:52:58.340: INFO: Got endpoints: latency-svc-vt5l6 [941.178674ms] May 20 23:52:58.367: INFO: Created: latency-svc-zb7hz May 20 23:52:58.429: INFO: Got endpoints: latency-svc-zb7hz [1.013223311s] May 20 23:52:58.432: INFO: Created: latency-svc-r8mqs May 20 23:52:58.440: INFO: Got endpoints: latency-svc-r8mqs [976.309834ms] May 20 23:52:58.466: INFO: Created: latency-svc-kwrrd May 20 23:52:58.483: INFO: Got endpoints: latency-svc-kwrrd [922.153985ms] May 20 23:52:58.507: INFO: Created: latency-svc-j4mt9 May 20 23:52:58.524: INFO: Got endpoints: latency-svc-j4mt9 [951.930387ms] May 20 23:52:58.603: INFO: Created: latency-svc-wnz5c May 20 23:52:58.615: INFO: Got endpoints: latency-svc-wnz5c [1.005545089s] May 20 23:52:58.649: INFO: Created: latency-svc-kb9pf May 20 23:52:58.662: INFO: Got endpoints: latency-svc-kb9pf [921.719495ms] May 20 23:52:58.699: INFO: Created: latency-svc-c2c85 May 20 23:52:58.746: INFO: Got endpoints: latency-svc-c2c85 [980.117616ms] May 20 23:52:58.759: INFO: Created: latency-svc-779xr May 20 23:52:58.777: INFO: Got endpoints: latency-svc-779xr [856.721262ms] May 20 23:52:58.799: INFO: Created: latency-svc-7dn64 May 20 23:52:58.814: INFO: Got endpoints: latency-svc-7dn64 [879.983342ms] May 20 23:52:58.836: INFO: Created: latency-svc-5z7dk May 20 23:52:58.896: INFO: Got endpoints: latency-svc-5z7dk [914.013394ms] May 20 23:52:58.927: INFO: Created: latency-svc-sjvtk May 20 23:52:58.944: INFO: Got endpoints: latency-svc-sjvtk [832.305049ms] May 20 23:52:58.964: 
INFO: Created: latency-svc-kr22t May 20 23:52:58.986: INFO: Got endpoints: latency-svc-kr22t [853.854502ms] May 20 23:52:59.047: INFO: Created: latency-svc-k7hb9 May 20 23:52:59.080: INFO: Got endpoints: latency-svc-k7hb9 [881.166662ms] May 20 23:52:59.082: INFO: Created: latency-svc-dvv9x May 20 23:52:59.102: INFO: Got endpoints: latency-svc-dvv9x [792.137418ms] May 20 23:52:59.132: INFO: Created: latency-svc-gx2pv May 20 23:52:59.207: INFO: Got endpoints: latency-svc-gx2pv [867.208702ms] May 20 23:52:59.211: INFO: Created: latency-svc-l4bvd May 20 23:52:59.251: INFO: Got endpoints: latency-svc-l4bvd [821.10653ms] May 20 23:52:59.294: INFO: Created: latency-svc-7ft55 May 20 23:52:59.363: INFO: Got endpoints: latency-svc-7ft55 [922.768768ms] May 20 23:52:59.405: INFO: Created: latency-svc-jnwb4 May 20 23:52:59.419: INFO: Got endpoints: latency-svc-jnwb4 [936.071719ms] May 20 23:52:59.459: INFO: Created: latency-svc-pg5xq May 20 23:52:59.519: INFO: Got endpoints: latency-svc-pg5xq [994.516151ms] May 20 23:52:59.523: INFO: Created: latency-svc-q85b7 May 20 23:52:59.535: INFO: Got endpoints: latency-svc-q85b7 [920.078198ms] May 20 23:52:59.557: INFO: Created: latency-svc-v9mmv May 20 23:52:59.588: INFO: Got endpoints: latency-svc-v9mmv [925.375273ms] May 20 23:52:59.609: INFO: Created: latency-svc-bmn7g May 20 23:52:59.693: INFO: Got endpoints: latency-svc-bmn7g [946.955031ms] May 20 23:52:59.749: INFO: Created: latency-svc-kblmp May 20 23:52:59.768: INFO: Got endpoints: latency-svc-kblmp [991.1507ms] May 20 23:52:59.860: INFO: Created: latency-svc-ktzpr May 20 23:52:59.879: INFO: Got endpoints: latency-svc-ktzpr [1.065499209s] May 20 23:52:59.918: INFO: Created: latency-svc-9l4j2 May 20 23:52:59.942: INFO: Got endpoints: latency-svc-9l4j2 [1.045974061s] May 20 23:53:00.016: INFO: Created: latency-svc-2ctpk May 20 23:53:00.035: INFO: Got endpoints: latency-svc-2ctpk [1.090885515s] May 20 23:53:00.065: INFO: Created: latency-svc-7tn46 May 20 23:53:00.081: INFO: Got 
endpoints: latency-svc-7tn46 [1.095677119s] May 20 23:53:00.104: INFO: Created: latency-svc-z6lb9 May 20 23:53:00.196: INFO: Got endpoints: latency-svc-z6lb9 [1.115340586s] May 20 23:53:00.199: INFO: Created: latency-svc-shz85 May 20 23:53:00.233: INFO: Got endpoints: latency-svc-shz85 [1.131212131s] May 20 23:53:00.275: INFO: Created: latency-svc-9bm2c May 20 23:53:00.369: INFO: Got endpoints: latency-svc-9bm2c [1.162112921s] May 20 23:53:00.371: INFO: Created: latency-svc-bkmt5 May 20 23:53:00.376: INFO: Got endpoints: latency-svc-bkmt5 [1.125215254s] May 20 23:53:00.425: INFO: Created: latency-svc-rdhvc May 20 23:53:00.443: INFO: Got endpoints: latency-svc-rdhvc [1.079519227s] May 20 23:53:00.519: INFO: Created: latency-svc-g9lnm May 20 23:53:00.521: INFO: Got endpoints: latency-svc-g9lnm [1.102579451s] May 20 23:53:00.554: INFO: Created: latency-svc-tptj5 May 20 23:53:00.569: INFO: Got endpoints: latency-svc-tptj5 [1.049999609s] May 20 23:53:00.589: INFO: Created: latency-svc-85452 May 20 23:53:00.599: INFO: Got endpoints: latency-svc-85452 [1.064247377s] May 20 23:53:00.680: INFO: Created: latency-svc-nm5qg May 20 23:53:00.704: INFO: Got endpoints: latency-svc-nm5qg [1.116381478s] May 20 23:53:00.727: INFO: Created: latency-svc-vrqcm May 20 23:53:00.759: INFO: Got endpoints: latency-svc-vrqcm [1.065498127s] May 20 23:53:00.830: INFO: Created: latency-svc-csv57 May 20 23:53:00.858: INFO: Got endpoints: latency-svc-csv57 [1.089258912s] May 20 23:53:00.858: INFO: Created: latency-svc-hdf84 May 20 23:53:00.875: INFO: Got endpoints: latency-svc-hdf84 [996.221438ms] May 20 23:53:00.901: INFO: Created: latency-svc-6w9qb May 20 23:53:00.925: INFO: Got endpoints: latency-svc-6w9qb [982.975333ms] May 20 23:53:00.989: INFO: Created: latency-svc-77tq4 May 20 23:53:00.998: INFO: Got endpoints: latency-svc-77tq4 [963.677585ms] May 20 23:53:01.025: INFO: Created: latency-svc-b29ln May 20 23:53:01.035: INFO: Got endpoints: latency-svc-b29ln [953.198853ms] May 20 23:53:01.062: 
INFO: Created: latency-svc-tchg4 May 20 23:53:01.071: INFO: Got endpoints: latency-svc-tchg4 [874.740151ms] May 20 23:53:01.129: INFO: Created: latency-svc-znpjs May 20 23:53:01.159: INFO: Got endpoints: latency-svc-znpjs [926.462627ms] May 20 23:53:01.196: INFO: Created: latency-svc-wwbpw May 20 23:53:01.209: INFO: Got endpoints: latency-svc-wwbpw [839.947945ms] May 20 23:53:01.303: INFO: Created: latency-svc-l6ctl May 20 23:53:01.334: INFO: Got endpoints: latency-svc-l6ctl [957.967989ms] May 20 23:53:01.335: INFO: Created: latency-svc-xpdfz May 20 23:53:01.347: INFO: Got endpoints: latency-svc-xpdfz [904.606662ms] May 20 23:53:01.376: INFO: Created: latency-svc-vstx4 May 20 23:53:01.390: INFO: Got endpoints: latency-svc-vstx4 [868.414217ms] May 20 23:53:01.471: INFO: Created: latency-svc-rmd8d May 20 23:53:01.480: INFO: Got endpoints: latency-svc-rmd8d [911.080164ms] May 20 23:53:01.505: INFO: Created: latency-svc-lbrsw May 20 23:53:01.532: INFO: Got endpoints: latency-svc-lbrsw [932.60251ms] May 20 23:53:01.562: INFO: Created: latency-svc-zb9p6 May 20 23:53:01.633: INFO: Got endpoints: latency-svc-zb9p6 [929.161087ms] May 20 23:53:01.662: INFO: Created: latency-svc-shsxn May 20 23:53:01.680: INFO: Got endpoints: latency-svc-shsxn [921.514002ms] May 20 23:53:01.807: INFO: Created: latency-svc-tjm6l May 20 23:53:01.823: INFO: Got endpoints: latency-svc-tjm6l [965.24899ms] May 20 23:53:01.962: INFO: Created: latency-svc-267fr May 20 23:53:01.966: INFO: Got endpoints: latency-svc-267fr [1.090656582s] May 20 23:53:02.006: INFO: Created: latency-svc-bq8l2 May 20 23:53:02.022: INFO: Got endpoints: latency-svc-bq8l2 [1.096544197s] May 20 23:53:02.051: INFO: Created: latency-svc-x956p May 20 23:53:02.131: INFO: Got endpoints: latency-svc-x956p [1.132267226s] May 20 23:53:02.132: INFO: Created: latency-svc-mjfh9 May 20 23:53:02.142: INFO: Got endpoints: latency-svc-mjfh9 [1.106862359s] May 20 23:53:02.168: INFO: Created: latency-svc-6jw86 May 20 23:53:02.182: INFO: Got 
endpoints: latency-svc-6jw86 [1.111872289s] May 20 23:53:02.220: INFO: Created: latency-svc-cpnlr May 20 23:53:02.315: INFO: Got endpoints: latency-svc-cpnlr [1.155614793s] May 20 23:53:02.317: INFO: Created: latency-svc-sj2ht May 20 23:53:02.327: INFO: Got endpoints: latency-svc-sj2ht [1.117751115s] May 20 23:53:02.354: INFO: Created: latency-svc-8xwsg May 20 23:53:02.363: INFO: Got endpoints: latency-svc-8xwsg [1.029531425s] May 20 23:53:02.383: INFO: Created: latency-svc-bmzzl May 20 23:53:02.406: INFO: Got endpoints: latency-svc-bmzzl [1.05824379s] May 20 23:53:02.474: INFO: Created: latency-svc-mqs4k May 20 23:53:02.477: INFO: Got endpoints: latency-svc-mqs4k [1.086906336s] May 20 23:53:02.510: INFO: Created: latency-svc-cx5wh May 20 23:53:02.520: INFO: Got endpoints: latency-svc-cx5wh [1.040403265s] May 20 23:53:02.539: INFO: Created: latency-svc-g8hcw May 20 23:53:02.551: INFO: Got endpoints: latency-svc-g8hcw [1.018836624s] May 20 23:53:02.603: INFO: Created: latency-svc-4ttvn May 20 23:53:02.621: INFO: Got endpoints: latency-svc-4ttvn [987.774805ms] May 20 23:53:02.658: INFO: Created: latency-svc-hj46c May 20 23:53:02.672: INFO: Got endpoints: latency-svc-hj46c [991.593772ms] May 20 23:53:02.759: INFO: Created: latency-svc-q4kc9 May 20 23:53:02.767: INFO: Got endpoints: latency-svc-q4kc9 [943.920377ms] May 20 23:53:02.801: INFO: Created: latency-svc-8h8ht May 20 23:53:02.832: INFO: Got endpoints: latency-svc-8h8ht [865.653455ms] May 20 23:53:02.914: INFO: Created: latency-svc-hmn2j May 20 23:53:02.942: INFO: Got endpoints: latency-svc-hmn2j [919.571538ms] May 20 23:53:02.966: INFO: Created: latency-svc-h78gb May 20 23:53:02.978: INFO: Got endpoints: latency-svc-h78gb [847.732267ms] May 20 23:53:03.000: INFO: Created: latency-svc-t4l6r May 20 23:53:03.058: INFO: Got endpoints: latency-svc-t4l6r [916.264413ms] May 20 23:53:03.060: INFO: Created: latency-svc-hdgvr May 20 23:53:03.074: INFO: Got endpoints: latency-svc-hdgvr [891.802453ms] May 20 23:53:03.074: 
INFO: Latencies: [62.324944ms 130.46636ms 328.162273ms 356.248851ms 418.482125ms 500.948843ms 615.075997ms 670.460554ms 705.139468ms 783.620462ms 792.137418ms 815.636564ms 821.10653ms 828.746707ms 831.95904ms 832.305049ms 838.021732ms 839.947945ms 847.732267ms 852.385029ms 853.854502ms 856.721262ms 862.971661ms 865.408413ms 865.653455ms 867.208702ms 868.414217ms 871.457999ms 874.740151ms 877.097667ms 878.481717ms 879.983342ms 881.166662ms 891.802453ms 901.314511ms 904.606662ms 907.344096ms 907.900602ms 911.080164ms 911.219437ms 913.443433ms 914.013394ms 916.264413ms 918.971503ms 919.571538ms 919.758195ms 920.078198ms 920.490209ms 921.514002ms 921.719495ms 922.153985ms 922.768768ms 925.375273ms 926.462627ms 926.536781ms 929.161087ms 932.489276ms 932.60251ms 936.071719ms 940.049086ms 941.178674ms 943.103934ms 943.843444ms 943.920377ms 946.363877ms 946.955031ms 947.350538ms 949.114334ms 951.930387ms 953.198853ms 954.345698ms 957.162189ms 957.967989ms 958.466472ms 960.592333ms 963.677585ms 964.125517ms 965.24899ms 967.021237ms 973.355927ms 976.309834ms 979.101623ms 980.117616ms 982.975333ms 983.616755ms 986.237677ms 987.774805ms 991.1507ms 991.593772ms 994.516151ms 996.091555ms 996.221438ms 1.003319859s 1.005545089s 1.005765189s 1.010275038s 1.01069743s 1.013223311s 1.016355938s 1.018500491s 1.018836624s 1.019778222s 1.029531425s 1.032495919s 1.033676757s 1.040403265s 1.042045072s 1.043964227s 1.045974061s 1.049482662s 1.049999609s 1.053047312s 1.05824379s 1.059945639s 1.062946673s 1.064247377s 1.064953286s 1.06507603s 1.065205044s 1.065498127s 1.065499209s 1.068033337s 1.070999466s 1.079519227s 1.083311447s 1.086268026s 1.086906336s 1.089258912s 1.090656582s 1.090677559s 1.090885515s 1.091187778s 1.091867961s 1.093290153s 1.095677119s 1.096544197s 1.101476344s 1.102040724s 1.102579451s 1.106300169s 1.106862359s 1.106971143s 1.108564672s 1.109020297s 1.109914249s 1.111225853s 1.111872289s 1.112341833s 1.112351118s 1.113679726s 1.115340586s 1.116053434s 1.116381478s 
1.117751115s 1.119553811s 1.121054556s 1.12420197s 1.125215254s 1.130524664s 1.131212131s 1.132267226s 1.134275173s 1.136117172s 1.13801422s 1.143047237s 1.152354673s 1.152716843s 1.155614793s 1.161961109s 1.162112921s 1.162572805s 1.16797225s 1.168422021s 1.169376544s 1.179187858s 1.188043227s 1.192796101s 1.193747877s 1.195877038s 1.199185582s 1.203287599s 1.20499804s 1.209222161s 1.213664302s 1.237223723s 1.244087926s 1.244507307s 1.246380764s 1.247148557s 1.284996966s 1.287083755s 1.293063588s 1.293194516s 1.304540541s 1.322686786s 1.323020203s 1.332494477s 1.352763266s 1.358040858s 1.36170498s]
May 20 23:53:03.075: INFO: 50 %ile: 1.018836624s
May 20 23:53:03.075: INFO: 90 %ile: 1.203287599s
May 20 23:53:03.075: INFO: 99 %ile: 1.358040858s
May 20 23:53:03.075: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 23:53:03.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-6167" for this suite.
• [SLOW TEST:18.212 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":288,"completed":66,"skipped":1189,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 23:53:03.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating all guestbook components
May 20 23:53:03.141: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

May 20 23:53:03.141: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5971'
May 20 23:53:03.453: INFO: stderr: ""
May 20 23:53:03.453: INFO: stdout: "service/agnhost-slave created\n"
May 20 23:53:03.453: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

May 20 23:53:03.453: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5971'
May 20 23:53:03.799: INFO: stderr: ""
May 20 23:53:03.799: INFO: stdout: "service/agnhost-master created\n"
May 20 23:53:03.799: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

May 20 23:53:03.799: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5971'
May 20 23:53:04.165: INFO: stderr: ""
May 20 23:53:04.165: INFO: stdout: "service/frontend created\n"
May 20 23:53:04.165: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

May 20 23:53:04.165: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5971'
May 20 23:53:04.406: INFO: stderr: ""
May 20 23:53:04.406: INFO: stdout: "deployment.apps/frontend created\n"
May 20 23:53:04.406: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May 20 23:53:04.406: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5971'
May 20 23:53:05.216: INFO: stderr: ""
May 20 23:53:05.216: INFO: stdout: "deployment.apps/agnhost-master created\n"
May 20 23:53:05.216: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May 20 23:53:05.216: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5971'
May 20 23:53:05.516: INFO: stderr: ""
May 20 23:53:05.516: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
May 20 23:53:05.516: INFO: Waiting for all frontend pods to be Running.
May 20 23:53:15.567: INFO: Waiting for frontend to serve content.
May 20 23:53:15.578: INFO: Trying to add a new entry to the guestbook.
May 20 23:53:15.594: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
May 20 23:53:15.872: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5971'
May 20 23:53:20.057: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n"
May 20 23:53:20.057: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
May 20 23:53:20.057: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5971'
May 20 23:53:20.429: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 20 23:53:20.429: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
May 20 23:53:20.429: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5971'
May 20 23:53:20.702: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 20 23:53:20.702: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May 20 23:53:20.702: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5971'
May 20 23:53:20.984: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 20 23:53:20.984: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May 20 23:53:20.984: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5971'
May 20 23:53:21.811: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 20 23:53:21.812: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
May 20 23:53:21.812: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5971'
May 20 23:53:22.620: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 20 23:53:22.620: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 23:53:22.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5971" for this suite.
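Every `Running '/usr/local/bin/kubectl ... -f -'` entry above pipes a manifest to kubectl on stdin. A small sketch of how such an argv could be assembled; the helper name is hypothetical, and actually executing the command requires a kubectl binary and a reachable cluster, so only the command construction is shown:

```python
def kubectl_cmd(action, server, kubeconfig, namespace, extra=()):
    """Build the argv for a `kubectl <action> -f -` call that reads its
    manifest from stdin, matching the invocations logged above.

    Hypothetical helper for illustration; running it for real requires
    a live cluster (e.g. via subprocess.run with input=manifest_text).
    """
    cmd = ["kubectl", "--server=" + server, "--kubeconfig=" + kubeconfig, action]
    cmd.extend(extra)
    cmd.extend(["-f", "-", "--namespace=" + namespace])
    return cmd

# The forced-delete variant used in the cleanup steps above.
delete_cmd = kubectl_cmd("delete", "https://172.30.12.66:32773",
                         "/root/.kube/config", "kubectl-5971",
                         extra=("--grace-period=0", "--force"))
```

The `--grace-period=0 --force` pair is what triggers the "Immediate deletion does not wait for confirmation" warning seen in stderr above.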
• [SLOW TEST:20.278 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":288,"completed":67,"skipped":1191,"failed":0}
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 23:53:23.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
May 20 23:53:32.822: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 20 23:53:32.852: INFO: Pod pod-with-prestop-exec-hook still exists
May 20 23:53:34.852: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 20 23:53:34.856: INFO: Pod pod-with-prestop-exec-hook still exists
May 20 23:53:36.852: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 20 23:53:36.921: INFO: Pod pod-with-prestop-exec-hook still exists
May 20 23:53:38.852: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 20 23:53:38.867: INFO: Pod pod-with-prestop-exec-hook still exists
May 20 23:53:40.852: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 20 23:53:40.890: INFO: Pod pod-with-prestop-exec-hook still exists
May 20 23:53:42.852: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 20 23:53:42.862: INFO: Pod pod-with-prestop-exec-hook still exists
May 20 23:53:44.852: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 20 23:53:44.903: INFO: Pod pod-with-prestop-exec-hook still exists
May 20 23:53:46.852: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 20 23:53:46.856: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 23:53:46.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1126" for this suite.
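The "Waiting for pod ... to disappear / still exists" pairs above are a fixed-interval poll (roughly every 2s) until the pod is gone. The real framework does this in Go with `wait`-style helpers; a minimal Python sketch of the same loop, with an injectable `sleep` so it can be exercised without real delays:

```python
import time

def wait_for_pod_disappearance(pod_exists, timeout=60.0, interval=2.0, sleep=time.sleep):
    """Poll pod_exists() every `interval` seconds until it returns False.

    Hypothetical sketch mirroring the 'Waiting for pod ... to disappear'
    loop above; the real e2e framework implements this in Go.
    """
    waited = 0.0
    while pod_exists():
        if waited >= timeout:
            raise TimeoutError("pod still exists after %.0fs" % timeout)
        sleep(interval)  # injectable so tests need no real delay
        waited += interval
    return waited

# Example: a stub that reports the pod gone on the fourth check.
checks = iter([True, True, True, False])
elapsed = wait_for_pod_disappearance(lambda: next(checks), sleep=lambda s: None)
```

With the stub above the loop sleeps three times before the pod "disappears", so `elapsed` is 6.0 simulated seconds.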
• [SLOW TEST:23.489 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":288,"completed":68,"skipped":1192,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 23:53:46.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 23:53:47.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4322" for this suite.
STEP: Destroying namespace "nspatchtest-c299aed2-0e25-4b1e-89f3-1316aba94a2b-7377" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":288,"completed":69,"skipped":1197,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 23:53:47.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 20 23:53:47.192: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 20 23:53:47.243: INFO: Number of nodes with available pods: 0
May 20 23:53:47.243: INFO: Node latest-worker is running more than one daemon pod
May 20 23:53:48.248: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 20 23:53:48.252: INFO: Number of nodes with available pods: 0
May 20 23:53:48.252: INFO: Node latest-worker is running more than one daemon pod
May 20 23:53:49.310: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 20 23:53:49.544: INFO: Number of nodes with available pods: 0
May 20 23:53:49.544: INFO: Node latest-worker is running more than one daemon pod
May 20 23:53:50.250: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 20 23:53:50.253: INFO: Number of nodes with available pods: 0
May 20 23:53:50.253: INFO: Node latest-worker is running more than one daemon pod
May 20 23:53:51.249: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 20 23:53:51.253: INFO: Number of nodes with available pods: 1
May 20 23:53:51.253: INFO: Node latest-worker2 is running more than one daemon pod
May 20 23:53:52.251: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 20 23:53:52.271: INFO: Number of nodes with available pods: 2
May 20 23:53:52.272: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
May 20 23:53:52.327: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 20 23:53:52.349: INFO: Number of nodes with available pods: 1
May 20 23:53:52.349: INFO: Node latest-worker is running more than one daemon pod
May 20 23:53:53.380: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 20 23:53:53.420: INFO: Number of nodes with available pods: 1
May 20 23:53:53.420: INFO: Node latest-worker is running more than one daemon pod
May 20 23:53:54.364: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 20 23:53:54.368: INFO: Number of nodes with available pods: 1
May 20 23:53:54.368: INFO: Node latest-worker is running more than one daemon pod
May 20 23:53:55.353: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 20 23:53:55.357: INFO: Number of nodes with available pods: 1
May 20 23:53:55.357: INFO: Node latest-worker is running more than one daemon pod
May 20 23:53:56.354: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 20 23:53:56.357: INFO: Number of nodes with available pods: 2
May 20 23:53:56.357: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6592, will wait for the garbage collector to delete the pods
May 20 23:53:56.444: INFO: Deleting DaemonSet.extensions daemon-set took: 29.386093ms
May 20 23:53:56.744: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.191539ms
May 20 23:54:05.351: INFO: Number of nodes with available pods: 0
May 20 23:54:05.351: INFO: Number of running nodes: 0, number of available pods: 0
May 20 23:54:05.353: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6592/daemonsets","resourceVersion":"6347350"},"items":null}
May 20 23:54:05.355: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6592/pods","resourceVersion":"6347350"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 23:54:05.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6592" for this suite.
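The DaemonSet check above repeatedly skips `latest-control-plane` (its `node-role.kubernetes.io/master:NoSchedule` taint is not tolerated) and then counts how many remaining nodes run an available daemon pod, succeeding once it reports "Number of running nodes: 2, number of available pods: 2". A simplified Python model of that counting step; the dict-based data model and function name are illustrative, not the real Go `v1.Node` handling:

```python
def count_available(nodes, available_pods_by_node, tolerated=()):
    """Skip nodes with NoSchedule taints the DaemonSet does not tolerate,
    then count candidate nodes that run at least one available daemon pod.

    Simplified sketch: nodes are dicts with 'name' and 'taints' keys,
    not real v1.Node objects.
    """
    candidates = [n["name"] for n in nodes
                  if all(t in tolerated for t in n["taints"])]
    ready = sum(1 for name in candidates
                if available_pods_by_node.get(name, 0) > 0)
    return len(candidates), ready

# The three-node cluster from the log above: the control plane is tainted.
nodes = [
    {"name": "latest-control-plane", "taints": ["node-role.kubernetes.io/master"]},
    {"name": "latest-worker", "taints": []},
    {"name": "latest-worker2", "taints": []},
]
total, ready = count_available(nodes, {"latest-worker": 1, "latest-worker2": 1})
```

With both workers running a pod this yields (2, 2), matching the final "running nodes: 2, available pods: 2" line above.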
• [SLOW TEST:18.322 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":288,"completed":70,"skipped":1207,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 23:54:05.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 20 23:54:05.992: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 20 23:54:08.303: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615646, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615646, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615646, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615645, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 20 23:54:11.365: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 20 23:54:11.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8410" for this suite.
STEP: Destroying namespace "webhook-8410-markers" for this suite.
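The "Wait for the deployment to be ready" step above loops until the `v1.DeploymentStatus` stops reporting `ReadyReplicas:0` with an `Available=False` / `MinimumReplicasUnavailable` condition. A pared-down sketch of that readiness decision over a dict mirroring the status dump; the field subset and helper name are assumptions for illustration:

```python
def deployment_available(status, min_available=1):
    """Decide deployment readiness the way the wait step above does:
    enough ready replicas plus an Available=True condition.

    `status` is a simplified dict mirroring v1.DeploymentStatus; the
    real check lives in the Go e2e framework.
    """
    conds = {c["type"]: c["status"] for c in status.get("conditions", [])}
    return (status.get("readyReplicas", 0) >= min_available
            and conds.get("Available") == "True")

# The intermediate state logged above: one replica exists but is not ready yet.
not_ready = {"readyReplicas": 0,
             "conditions": [{"type": "Available", "status": "False"},
                            {"type": "Progressing", "status": "True"}]}
```

`deployment_available(not_ready)` is False, which is why the framework keeps polling until the webhook deployment's Available condition flips to True.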
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.723 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":288,"completed":71,"skipped":1234,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 23:54:12.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
May 20 23:54:16.234: INFO: &Pod{ObjectMeta:{send-events-9b43320a-465a-48c3-a9c5-56d4f9ed7829 events-6958 /api/v1/namespaces/events-6958/pods/send-events-9b43320a-465a-48c3-a9c5-56d4f9ed7829 0f579217-b5d9-4232-8ab4-83c1ee1a47bb 6347491 0 2020-05-20 23:54:12 +0000 UTC map[name:foo
time:198240454] map[] [] [] [{e2e.test Update v1 2020-05-20 23:54:12 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-20 23:54:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.103\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fjd9q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fjd9q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Prot
ocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fjd9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:54:12 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:54:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:54:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-20 23:54:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.103,StartTime:2020-05-20 23:54:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-20 23:54:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://060f100ca7d8d9756e9d8872e3ba38a3b8043dff2ce1721750b341f50e56959e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.103,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 20 23:54:18.239: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 20 23:54:20.244: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:54:20.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6958" for this suite. 
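The "checking for scheduler event" and "checking for kubelet event" steps above each look for events about the pod emitted by one specific component. A simplified sketch of that filter over plain dicts; the real objects are `v1.Event` values fetched from the API server, so the dict shape here is an assumption for illustration:

```python
def events_from_component(events, component, pod_name):
    """Select events for one pod emitted by one source component,
    as in the scheduler/kubelet event checks above.

    Simplified sketch: events are dicts, not real v1.Event objects.
    """
    return [e for e in events
            if e["source"] == component and e["involved_object"] == pod_name]

# A toy event stream resembling what the test would see for its pod.
events = [
    {"source": "default-scheduler", "involved_object": "send-events-1", "reason": "Scheduled"},
    {"source": "kubelet", "involved_object": "send-events-1", "reason": "Started"},
    {"source": "kubelet", "involved_object": "other-pod", "reason": "Started"},
]
scheduler_events = events_from_component(events, "default-scheduler", "send-events-1")
kubelet_events = events_from_component(events, "kubelet", "send-events-1")
```

The test passes once both filtered lists are non-empty, matching the "Saw scheduler event" / "Saw kubelet event" lines above.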
• [SLOW TEST:8.188 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":288,"completed":72,"skipped":1282,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 20 23:54:20.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 20 23:54:20.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
May 20 23:54:22.406: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3028 create -f -'
May 20 23:54:25.861: INFO: stderr: ""
May 20 23:54:25.861: INFO: stdout: "e2e-test-crd-publish-openapi-1805-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
May 20 23:54:25.861: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3028 delete e2e-test-crd-publish-openapi-1805-crds test-foo'
May 20 23:54:25.973: INFO: stderr: ""
May 20 23:54:25.973: INFO: stdout: "e2e-test-crd-publish-openapi-1805-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
May 20 23:54:25.973: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3028 apply -f -'
May 20 23:54:26.261: INFO: stderr: ""
May 20 23:54:26.261: INFO: stdout: "e2e-test-crd-publish-openapi-1805-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
May 20 23:54:26.261: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3028 delete e2e-test-crd-publish-openapi-1805-crds test-foo'
May 20 23:54:26.373: INFO: stderr: ""
May 20 23:54:26.373: INFO: stdout: "e2e-test-crd-publish-openapi-1805-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
May 20 23:54:26.373: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3028 create -f -'
May 20 23:54:26.644: INFO: rc: 1
May 20 23:54:26.645: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3028 apply -f -'
May 20 23:54:26.903: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
May 20 23:54:26.903: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3028 create -f -'
May 20 23:54:27.170: INFO: rc: 1
May 20 23:54:27.170: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3028 apply -f -'
May 20 23:54:27.422: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
May 20 23:54:27.422: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1805-crds'
May 20 23:54:27.715: INFO: stderr: ""
May 20 23:54:27.715: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1805-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
May 20 23:54:27.716: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1805-crds.metadata'
May 20 23:54:27.959: INFO: stderr: ""
May 20 23:54:27.959: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1805-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked.
Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. 
Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. 
Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. 
DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 20 23:54:27.959: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1805-crds.spec' May 20 23:54:28.201: INFO: stderr: "" May 20 23:54:28.201: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1805-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 20 23:54:28.201: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1805-crds.spec.bars' May 20 23:54:28.444: INFO: stderr: "" May 20 23:54:28.444: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1805-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 20 23:54:28.445: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1805-crds.spec.bars2' May 20 23:54:28.725: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:54:31.700: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3028" for this suite. • [SLOW TEST:11.421 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":288,"completed":73,"skipped":1310,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:54:31.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 20 23:54:31.802: INFO: Create a RollingUpdate DaemonSet May 20 23:54:31.806: INFO: Check that daemon pods launch on every node of the cluster May 20 23:54:31.817: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:54:31.859: INFO: Number of nodes with available pods: 0 
May 20 23:54:31.859: INFO: Node latest-worker is running more than one daemon pod May 20 23:54:32.864: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:54:32.867: INFO: Number of nodes with available pods: 0 May 20 23:54:32.867: INFO: Node latest-worker is running more than one daemon pod May 20 23:54:33.864: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:54:33.868: INFO: Number of nodes with available pods: 0 May 20 23:54:33.868: INFO: Node latest-worker is running more than one daemon pod May 20 23:54:34.862: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:54:34.865: INFO: Number of nodes with available pods: 0 May 20 23:54:34.865: INFO: Node latest-worker is running more than one daemon pod May 20 23:54:35.864: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:54:35.867: INFO: Number of nodes with available pods: 0 May 20 23:54:35.867: INFO: Node latest-worker is running more than one daemon pod May 20 23:54:36.863: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:54:36.867: INFO: Number of nodes with available pods: 2 May 20 23:54:36.867: INFO: Number of running nodes: 2, number of available pods: 2 May 20 23:54:36.867: INFO: Update the DaemonSet to trigger a rollout May 20 23:54:36.909: INFO: Updating DaemonSet daemon-set May 20 23:54:45.048: INFO: Roll back the DaemonSet before rollout is 
complete May 20 23:54:45.076: INFO: Updating DaemonSet daemon-set May 20 23:54:45.076: INFO: Make sure DaemonSet rollback is complete May 20 23:54:45.086: INFO: Wrong image for pod: daemon-set-npv56. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 20 23:54:45.086: INFO: Pod daemon-set-npv56 is not available May 20 23:54:45.107: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:54:46.119: INFO: Wrong image for pod: daemon-set-npv56. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 20 23:54:46.119: INFO: Pod daemon-set-npv56 is not available May 20 23:54:46.123: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:54:47.424: INFO: Wrong image for pod: daemon-set-npv56. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 20 23:54:47.424: INFO: Pod daemon-set-npv56 is not available May 20 23:54:47.440: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:54:48.112: INFO: Wrong image for pod: daemon-set-npv56. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 20 23:54:48.113: INFO: Pod daemon-set-npv56 is not available May 20 23:54:48.117: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:54:49.112: INFO: Wrong image for pod: daemon-set-npv56. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
May 20 23:54:49.112: INFO: Pod daemon-set-npv56 is not available May 20 23:54:49.118: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:54:50.113: INFO: Wrong image for pod: daemon-set-npv56. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 20 23:54:50.113: INFO: Pod daemon-set-npv56 is not available May 20 23:54:50.118: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:54:51.112: INFO: Wrong image for pod: daemon-set-npv56. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 20 23:54:51.112: INFO: Pod daemon-set-npv56 is not available May 20 23:54:51.117: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:54:52.113: INFO: Wrong image for pod: daemon-set-npv56. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 20 23:54:52.113: INFO: Pod daemon-set-npv56 is not available May 20 23:54:52.118: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:54:53.131: INFO: Wrong image for pod: daemon-set-npv56. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 20 23:54:53.131: INFO: Pod daemon-set-npv56 is not available May 20 23:54:53.135: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:54:54.112: INFO: Wrong image for pod: daemon-set-npv56. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
May 20 23:54:54.112: INFO: Pod daemon-set-npv56 is not available May 20 23:54:54.116: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 20 23:54:55.111: INFO: Pod daemon-set-2z7dd is not available May 20 23:54:55.115: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3356, will wait for the garbage collector to delete the pods May 20 23:54:55.186: INFO: Deleting DaemonSet.extensions daemon-set took: 13.176211ms May 20 23:54:55.487: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.225033ms May 20 23:55:05.289: INFO: Number of nodes with available pods: 0 May 20 23:55:05.289: INFO: Number of running nodes: 0, number of available pods: 0 May 20 23:55:05.291: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3356/daemonsets","resourceVersion":"6347759"},"items":null} May 20 23:55:05.293: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3356/pods","resourceVersion":"6347759"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:55:05.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3356" for this suite. 
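The rollback test above creates a RollingUpdate DaemonSet, breaks its rollout with a non-existent image (`foo:non-existent`), then rolls back before the rollout completes. A minimal sketch of the DaemonSet involved, with hypothetical names and labels (only the images and namespace appear in the log):

```yaml
# Sketch of a RollingUpdate DaemonSet like the one exercised above.
# The selector/label names are illustrative, not taken from the test source.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-3356
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate        # rollout proceeds node by node
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine   # the "good" image from the log
```

The test then updates the image to `foo:non-existent` and reverts while pods are still failing; the equivalent CLI step would be `kubectl rollout undo daemonset/daemon-set`, though the test drives the API directly. The point of the assertion is that healthy pods running the original image are not restarted by the rollback.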
• [SLOW TEST:33.602 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":288,"completed":74,"skipped":1318,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:55:05.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 20 23:55:10.464: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:55:10.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4626" for this suite. 
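The ReplicaSet adoption test above creates a bare pod first, then a ReplicaSet whose selector matches the pod's label; the controller adopts the orphan by adding an ownerReference. A sketch under assumed names (only `pod-adoption-release` appears in the log):

```yaml
# Orphan pod created first; the ReplicaSet below has a matching selector
# and adopts it rather than creating a new replica. Images are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption-release
  labels:
    name: pod-adoption-release
spec:
  containers:
  - name: app
    image: docker.io/library/httpd:2.4.38-alpine
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine
```

Changing the pod's `name` label afterwards (the "matched label of one of its pods change" step) makes it stop matching the selector: the controller removes its ownerReference, releasing the pod, and creates a replacement to restore the replica count.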
• [SLOW TEST:5.341 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":288,"completed":75,"skipped":1345,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:55:10.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 20 23:55:11.465: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 20 23:55:13.472: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615711, 
loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615711, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615711, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725615711, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 20 23:55:16.535: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 20 23:55:20.902: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config attach --namespace=webhook-6152 to-be-attached-pod -i -c=container1' May 20 23:55:21.055: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:55:21.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6152" for this suite. STEP: Destroying namespace "webhook-6152-markers" for this suite. 
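The denied `kubectl attach` (rc: 1) above comes from a validating webhook registered against the `pods/attach` subresource. A minimal sketch of such a registration, assuming the service name and path (the log only shows the namespace and the `e2e-test-webhook` endpoint name):

```yaml
# Sketch of a ValidatingWebhookConfiguration intercepting 'kubectl attach'.
# Attach is a CONNECT operation on the pods/attach subresource.
# Service path and caBundle are placeholders.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-attaching-pod-webhook
webhooks:
- name: deny-attaching-pod.example.com
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CONNECT"]
    resources: ["pods/attach"]
  clientConfig:
    service:
      namespace: webhook-6152
      name: e2e-test-webhook
      path: "/pods/attach"
    caBundle: "<base64-encoded CA bundle>"
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
```

With this in place, any attach request to a matching pod is sent to the webhook service for review before the kubelet connection is established, which is why the `kubectl attach` in the log exits non-zero.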
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.536 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":288,"completed":76,"skipped":1357,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:55:21.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 20 23:55:21.301: INFO: Waiting up to 5m0s for pod "downwardapi-volume-93e3d9a5-2f1c-4ec8-a3f7-b0c5cb01eb10" in namespace "projected-7322" to be "Succeeded or Failed" May 20 23:55:21.337: INFO: Pod 
"downwardapi-volume-93e3d9a5-2f1c-4ec8-a3f7-b0c5cb01eb10": Phase="Pending", Reason="", readiness=false. Elapsed: 36.269996ms May 20 23:55:23.341: INFO: Pod "downwardapi-volume-93e3d9a5-2f1c-4ec8-a3f7-b0c5cb01eb10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039965691s May 20 23:55:25.382: INFO: Pod "downwardapi-volume-93e3d9a5-2f1c-4ec8-a3f7-b0c5cb01eb10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081147412s STEP: Saw pod success May 20 23:55:25.382: INFO: Pod "downwardapi-volume-93e3d9a5-2f1c-4ec8-a3f7-b0c5cb01eb10" satisfied condition "Succeeded or Failed" May 20 23:55:25.386: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-93e3d9a5-2f1c-4ec8-a3f7-b0c5cb01eb10 container client-container: STEP: delete the pod May 20 23:55:25.460: INFO: Waiting for pod downwardapi-volume-93e3d9a5-2f1c-4ec8-a3f7-b0c5cb01eb10 to disappear May 20 23:55:25.464: INFO: Pod downwardapi-volume-93e3d9a5-2f1c-4ec8-a3f7-b0c5cb01eb10 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:55:25.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7322" for this suite. 
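The DefaultMode test above mounts a projected downwardAPI volume and checks the file permissions inside the container. A sketch of the pod shape, with the test image and arguments assumed (only the `client-container` name and the pod name prefix appear in the log):

```yaml
# Sketch of a pod exercising projected downwardAPI with defaultMode.
# Field names follow the core v1 API; image/args are assumptions about
# the e2e test helper, not confirmed by the log.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.12   # hypothetical helper image
    args: ["mounttest", "--file_mode=/etc/podinfo/podname"]  # hypothetical invocation
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400        # applied to files lacking a per-item mode
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```

The pod runs to completion ("Succeeded or Failed" in the log) and its output is checked for the expected mode bits, which is why the framework fetches the container logs after the pod succeeds.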
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":77,"skipped":1359,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:55:25.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1311 STEP: creating the pod May 20 23:55:25.551: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2045' May 20 23:55:25.875: INFO: stderr: "" May 20 23:55:25.875: INFO: stdout: "pod/pause created\n" May 20 23:55:25.875: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 20 23:55:25.875: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2045" to be "running and ready" May 20 23:55:25.878: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.316536ms May 20 23:55:27.915: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039748389s May 20 23:55:29.919: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.043966555s May 20 23:55:29.920: INFO: Pod "pause" satisfied condition "running and ready" May 20 23:55:29.920: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod May 20 23:55:29.920: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-2045' May 20 23:55:30.034: INFO: stderr: "" May 20 23:55:30.035: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 20 23:55:30.035: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2045' May 20 23:55:30.143: INFO: stderr: "" May 20 23:55:30.143: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod May 20 23:55:30.143: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-2045' May 20 23:55:30.264: INFO: stderr: "" May 20 23:55:30.264: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 20 23:55:30.264: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2045' May 20 23:55:30.361: INFO: stderr: "" May 20 23:55:30.361: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1318 STEP: using delete to clean up resources May 20 23:55:30.361: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2045' May 20 23:55:30.507: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 20 23:55:30.507: INFO: stdout: "pod \"pause\" force deleted\n" May 20 23:55:30.507: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-2045' May 20 23:55:30.634: INFO: stderr: "No resources found in kubectl-2045 namespace.\n" May 20 23:55:30.634: INFO: stdout: "" May 20 23:55:30.634: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-2045 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 20 23:55:30.837: INFO: stderr: "" May 20 23:55:30.837: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:55:30.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2045" for this suite. 
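[editor's note] The cleanup step above pipes `kubectl get pods` through a go-template that prints only the pods without a `deletionTimestamp` (empty stdout here confirms every pod is already terminating). The same template can be exercised locally with Go's `text/template` against a mock PodList-shaped document — the lowercase map keys mimic the generic JSON kubectl hands the template, and the pod names below are invented for illustration:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// cleanupTemplate is the exact go-template the test passes to kubectl:
// for each item in the list, print its name only when the item has no
// deletionTimestamp, i.e. it is not already being deleted.
const cleanupTemplate = `{{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}`

// renderPodNames evaluates the template against a decoded PodList-like
// document. kubectl feeds the template the API object as generic JSON,
// which is why map[string]any with lowercase keys stands in for it here.
func renderPodNames(list map[string]any) (string, error) {
	t, err := template.New("cleanup").Parse(cleanupTemplate)
	if err != nil {
		return "", err
	}
	var b strings.Builder
	if err := t.Execute(&b, list); err != nil {
		return "", err
	}
	return b.String(), nil
}

func main() {
	list := map[string]any{
		"items": []any{
			// a live pod: no deletionTimestamp, so its name is printed
			map[string]any{"metadata": map[string]any{"name": "pause"}},
			// a terminating pod: filtered out by the template
			map[string]any{"metadata": map[string]any{
				"name":              "doomed",
				"deletionTimestamp": "2020-05-20T23:55:30Z",
			}},
		},
	}
	out, err := renderPodNames(list)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%q\n", out) // prints "pause\n"
}
```

In the run above the template produced empty stdout, which is how the test verifies the force-deleted pod is gone or terminating.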
• [SLOW TEST:5.458 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":288,"completed":78,"skipped":1388,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:55:30.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-3d08bc44-5bb0-4fbb-9523-c1b0aaaa3286 STEP: Creating a pod to test consume configMaps May 20 23:55:31.277: INFO: Waiting up to 5m0s for pod "pod-configmaps-f720a43e-f91f-46a6-89ed-e979f33ddd44" in namespace "configmap-1186" to be "Succeeded or Failed" May 20 23:55:31.544: INFO: Pod "pod-configmaps-f720a43e-f91f-46a6-89ed-e979f33ddd44": Phase="Pending", Reason="", readiness=false. 
Elapsed: 267.008404ms May 20 23:55:33.548: INFO: Pod "pod-configmaps-f720a43e-f91f-46a6-89ed-e979f33ddd44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.270749105s May 20 23:55:35.551: INFO: Pod "pod-configmaps-f720a43e-f91f-46a6-89ed-e979f33ddd44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.274386625s STEP: Saw pod success May 20 23:55:35.551: INFO: Pod "pod-configmaps-f720a43e-f91f-46a6-89ed-e979f33ddd44" satisfied condition "Succeeded or Failed" May 20 23:55:35.554: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-f720a43e-f91f-46a6-89ed-e979f33ddd44 container configmap-volume-test: STEP: delete the pod May 20 23:55:35.776: INFO: Waiting for pod pod-configmaps-f720a43e-f91f-46a6-89ed-e979f33ddd44 to disappear May 20 23:55:35.827: INFO: Pod pod-configmaps-f720a43e-f91f-46a6-89ed-e979f33ddd44 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:55:35.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1186" for this suite. 
• [SLOW TEST:5.029 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":79,"skipped":1404,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:55:35.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 20 23:55:40.169: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:55:40.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5058" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":80,"skipped":1433,"failed":0} SSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:55:40.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-2922 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2922 STEP: creating replication controller externalsvc in namespace services-2922 I0520 23:55:40.844007 8 runners.go:190] Created replication controller with name: externalsvc, namespace: services-2922, replica count: 2 I0520 23:55:43.894474 8 runners.go:190] externalsvc Pods: 2 out 
of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 23:55:46.894728 8 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 20 23:55:46.990: INFO: Creating new exec pod May 20 23:55:51.017: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2922 execpod5clxd -- /bin/sh -x -c nslookup nodeport-service' May 20 23:55:51.414: INFO: stderr: "I0520 23:55:51.155910 2367 log.go:172] (0xc0006f8160) (0xc0004cd220) Create stream\nI0520 23:55:51.155969 2367 log.go:172] (0xc0006f8160) (0xc0004cd220) Stream added, broadcasting: 1\nI0520 23:55:51.159193 2367 log.go:172] (0xc0006f8160) Reply frame received for 1\nI0520 23:55:51.159248 2367 log.go:172] (0xc0006f8160) (0xc000182dc0) Create stream\nI0520 23:55:51.159275 2367 log.go:172] (0xc0006f8160) (0xc000182dc0) Stream added, broadcasting: 3\nI0520 23:55:51.160483 2367 log.go:172] (0xc0006f8160) Reply frame received for 3\nI0520 23:55:51.160553 2367 log.go:172] (0xc0006f8160) (0xc0000edf40) Create stream\nI0520 23:55:51.160575 2367 log.go:172] (0xc0006f8160) (0xc0000edf40) Stream added, broadcasting: 5\nI0520 23:55:51.162022 2367 log.go:172] (0xc0006f8160) Reply frame received for 5\nI0520 23:55:51.254096 2367 log.go:172] (0xc0006f8160) Data frame received for 5\nI0520 23:55:51.254135 2367 log.go:172] (0xc0000edf40) (5) Data frame handling\nI0520 23:55:51.254163 2367 log.go:172] (0xc0000edf40) (5) Data frame sent\n+ nslookup nodeport-service\nI0520 23:55:51.403380 2367 log.go:172] (0xc0006f8160) Data frame received for 3\nI0520 23:55:51.403413 2367 log.go:172] (0xc000182dc0) (3) Data frame handling\nI0520 23:55:51.403440 2367 log.go:172] (0xc000182dc0) (3) Data frame sent\nI0520 23:55:51.405816 2367 log.go:172] (0xc0006f8160) Data frame 
received for 3\nI0520 23:55:51.405833 2367 log.go:172] (0xc000182dc0) (3) Data frame handling\nI0520 23:55:51.405841 2367 log.go:172] (0xc000182dc0) (3) Data frame sent\nI0520 23:55:51.406714 2367 log.go:172] (0xc0006f8160) Data frame received for 5\nI0520 23:55:51.406753 2367 log.go:172] (0xc0000edf40) (5) Data frame handling\nI0520 23:55:51.406813 2367 log.go:172] (0xc0006f8160) Data frame received for 3\nI0520 23:55:51.406845 2367 log.go:172] (0xc000182dc0) (3) Data frame handling\nI0520 23:55:51.408484 2367 log.go:172] (0xc0006f8160) Data frame received for 1\nI0520 23:55:51.408520 2367 log.go:172] (0xc0004cd220) (1) Data frame handling\nI0520 23:55:51.408537 2367 log.go:172] (0xc0004cd220) (1) Data frame sent\nI0520 23:55:51.408554 2367 log.go:172] (0xc0006f8160) (0xc0004cd220) Stream removed, broadcasting: 1\nI0520 23:55:51.408578 2367 log.go:172] (0xc0006f8160) Go away received\nI0520 23:55:51.408960 2367 log.go:172] (0xc0006f8160) (0xc0004cd220) Stream removed, broadcasting: 1\nI0520 23:55:51.408978 2367 log.go:172] (0xc0006f8160) (0xc000182dc0) Stream removed, broadcasting: 3\nI0520 23:55:51.408987 2367 log.go:172] (0xc0006f8160) (0xc0000edf40) Stream removed, broadcasting: 5\n" May 20 23:55:51.414: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-2922.svc.cluster.local\tcanonical name = externalsvc.services-2922.svc.cluster.local.\nName:\texternalsvc.services-2922.svc.cluster.local\nAddress: 10.96.81.77\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2922, will wait for the garbage collector to delete the pods May 20 23:55:51.475: INFO: Deleting ReplicationController externalsvc took: 6.272755ms May 20 23:55:51.775: INFO: Terminating ReplicationController externalsvc pods took: 300.281001ms May 20 23:56:05.343: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:56:05.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2922" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:25.150 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":288,"completed":81,"skipped":1436,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:56:05.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath May 20 23:56:09.488: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-2568 PodName:var-expansion-a32426c6-584d-4a3f-8d62-044ca11d0ae7 
ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 20 23:56:09.488: INFO: >>> kubeConfig: /root/.kube/config I0520 23:56:09.522416 8 log.go:172] (0xc005ae0d10) (0xc0013a1040) Create stream I0520 23:56:09.522443 8 log.go:172] (0xc005ae0d10) (0xc0013a1040) Stream added, broadcasting: 1 I0520 23:56:09.529284 8 log.go:172] (0xc005ae0d10) Reply frame received for 1 I0520 23:56:09.529408 8 log.go:172] (0xc005ae0d10) (0xc001562320) Create stream I0520 23:56:09.529471 8 log.go:172] (0xc005ae0d10) (0xc001562320) Stream added, broadcasting: 3 I0520 23:56:09.531021 8 log.go:172] (0xc005ae0d10) Reply frame received for 3 I0520 23:56:09.531078 8 log.go:172] (0xc005ae0d10) (0xc0013a0000) Create stream I0520 23:56:09.531097 8 log.go:172] (0xc005ae0d10) (0xc0013a0000) Stream added, broadcasting: 5 I0520 23:56:09.531904 8 log.go:172] (0xc005ae0d10) Reply frame received for 5 I0520 23:56:09.586687 8 log.go:172] (0xc005ae0d10) Data frame received for 5 I0520 23:56:09.586768 8 log.go:172] (0xc0013a0000) (5) Data frame handling I0520 23:56:09.586856 8 log.go:172] (0xc005ae0d10) Data frame received for 3 I0520 23:56:09.586880 8 log.go:172] (0xc001562320) (3) Data frame handling I0520 23:56:09.588406 8 log.go:172] (0xc005ae0d10) Data frame received for 1 I0520 23:56:09.588434 8 log.go:172] (0xc0013a1040) (1) Data frame handling I0520 23:56:09.588442 8 log.go:172] (0xc0013a1040) (1) Data frame sent I0520 23:56:09.588453 8 log.go:172] (0xc005ae0d10) (0xc0013a1040) Stream removed, broadcasting: 1 I0520 23:56:09.588500 8 log.go:172] (0xc005ae0d10) Go away received I0520 23:56:09.588533 8 log.go:172] (0xc005ae0d10) (0xc0013a1040) Stream removed, broadcasting: 1 I0520 23:56:09.588550 8 log.go:172] (0xc005ae0d10) (0xc001562320) Stream removed, broadcasting: 3 I0520 23:56:09.588562 8 log.go:172] (0xc005ae0d10) (0xc0013a0000) Stream removed, broadcasting: 5 STEP: test for file in mounted path May 20 23:56:09.594: INFO: ExecWithOptions 
{Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-2568 PodName:var-expansion-a32426c6-584d-4a3f-8d62-044ca11d0ae7 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 20 23:56:09.594: INFO: >>> kubeConfig: /root/.kube/config I0520 23:56:09.628233 8 log.go:172] (0xc002c8d6b0) (0xc00212c640) Create stream I0520 23:56:09.628264 8 log.go:172] (0xc002c8d6b0) (0xc00212c640) Stream added, broadcasting: 1 I0520 23:56:09.630614 8 log.go:172] (0xc002c8d6b0) Reply frame received for 1 I0520 23:56:09.630671 8 log.go:172] (0xc002c8d6b0) (0xc0020bc000) Create stream I0520 23:56:09.630694 8 log.go:172] (0xc002c8d6b0) (0xc0020bc000) Stream added, broadcasting: 3 I0520 23:56:09.631930 8 log.go:172] (0xc002c8d6b0) Reply frame received for 3 I0520 23:56:09.631966 8 log.go:172] (0xc002c8d6b0) (0xc001958140) Create stream I0520 23:56:09.631975 8 log.go:172] (0xc002c8d6b0) (0xc001958140) Stream added, broadcasting: 5 I0520 23:56:09.632789 8 log.go:172] (0xc002c8d6b0) Reply frame received for 5 I0520 23:56:09.703536 8 log.go:172] (0xc002c8d6b0) Data frame received for 3 I0520 23:56:09.703570 8 log.go:172] (0xc002c8d6b0) Data frame received for 5 I0520 23:56:09.703593 8 log.go:172] (0xc001958140) (5) Data frame handling I0520 23:56:09.703611 8 log.go:172] (0xc0020bc000) (3) Data frame handling I0520 23:56:09.704818 8 log.go:172] (0xc002c8d6b0) Data frame received for 1 I0520 23:56:09.704837 8 log.go:172] (0xc00212c640) (1) Data frame handling I0520 23:56:09.704846 8 log.go:172] (0xc00212c640) (1) Data frame sent I0520 23:56:09.704856 8 log.go:172] (0xc002c8d6b0) (0xc00212c640) Stream removed, broadcasting: 1 I0520 23:56:09.704869 8 log.go:172] (0xc002c8d6b0) Go away received I0520 23:56:09.704987 8 log.go:172] (0xc002c8d6b0) (0xc00212c640) Stream removed, broadcasting: 1 I0520 23:56:09.705004 8 log.go:172] (0xc002c8d6b0) (0xc0020bc000) Stream removed, broadcasting: 3 I0520 23:56:09.705020 8 log.go:172] 
(0xc002c8d6b0) (0xc001958140) Stream removed, broadcasting: 5 STEP: updating the annotation value May 20 23:56:10.216: INFO: Successfully updated pod "var-expansion-a32426c6-584d-4a3f-8d62-044ca11d0ae7" STEP: waiting for annotated pod running STEP: deleting the pod gracefully May 20 23:56:10.263: INFO: Deleting pod "var-expansion-a32426c6-584d-4a3f-8d62-044ca11d0ae7" in namespace "var-expansion-2568" May 20 23:56:10.270: INFO: Wait up to 5m0s for pod "var-expansion-a32426c6-584d-4a3f-8d62-044ca11d0ae7" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:56:56.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2568" for this suite. • [SLOW TEST:50.938 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":288,"completed":82,"skipped":1454,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:56:56.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default 
service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod May 20 23:58:56.987: INFO: Successfully updated pod "var-expansion-f29ad997-627d-4e76-a8d2-166283b11e98" STEP: waiting for pod running STEP: deleting the pod gracefully May 20 23:58:59.016: INFO: Deleting pod "var-expansion-f29ad997-627d-4e76-a8d2-166283b11e98" in namespace "var-expansion-1972" May 20 23:58:59.023: INFO: Wait up to 5m0s for pod "var-expansion-f29ad997-627d-4e76-a8d2-166283b11e98" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:59:33.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1972" for this suite. 
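[editor's note] Both Variable Expansion specs above exercise Kubernetes' `$(VAR)` substitution in expanded fields such as `subPathExpr`. A simplified sketch of those expansion rules, under the assumption (matching the documented behavior) that `$$` escapes a literal `$` and references that cannot be resolved are left in the string untouched:

```go
package main

import (
	"fmt"
	"strings"
)

// expand performs $(VAR) substitution over input using vars:
//   - "$$"      becomes a literal "$"
//   - "$(NAME)" becomes vars[NAME] when NAME is defined
//   - anything else, including undefined references, is copied as-is
func expand(input string, vars map[string]string) string {
	var b strings.Builder
	for i := 0; i < len(input); i++ {
		if input[i] != '$' {
			b.WriteByte(input[i])
			continue
		}
		// "$$" -> literal "$"
		if i+1 < len(input) && input[i+1] == '$' {
			b.WriteByte('$')
			i++
			continue
		}
		// "$(NAME)" -> lookup; leave untouched when undefined
		if i+1 < len(input) && input[i+1] == '(' {
			if end := strings.IndexByte(input[i+2:], ')'); end >= 0 {
				name := input[i+2 : i+2+end]
				if val, ok := vars[name]; ok {
					b.WriteString(val)
					i += 2 + end // skip past the closing ')'
					continue
				}
			}
		}
		b.WriteByte('$') // bare '$' with no valid reference
	}
	return b.String()
}

func main() {
	vars := map[string]string{"POD_NAME": "var-expansion-demo"}
	fmt.Println(expand("/logs/$(POD_NAME)/app.log", vars)) // resolved
	fmt.Println(expand("/logs/$(MISSING)/app.log", vars))  // left intact
	fmt.Println(expand("cost: $$5", vars))                 // escaped
}
```

The second spec's pod sits in a failed condition for roughly two minutes precisely because its subpath expansion cannot initially be satisfied; updating the pod supplies a usable value and lets it run.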
• [SLOW TEST:156.746 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":288,"completed":83,"skipped":1480,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:59:33.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 20 23:59:33.213: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0afd40e7-e610-4922-90ba-2ca598ac0e9a" in namespace "downward-api-1936" to be "Succeeded or Failed" May 20 23:59:33.235: INFO: Pod 
"downwardapi-volume-0afd40e7-e610-4922-90ba-2ca598ac0e9a": Phase="Pending", Reason="", readiness=false. Elapsed: 21.779803ms May 20 23:59:35.239: INFO: Pod "downwardapi-volume-0afd40e7-e610-4922-90ba-2ca598ac0e9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025901262s May 20 23:59:37.243: INFO: Pod "downwardapi-volume-0afd40e7-e610-4922-90ba-2ca598ac0e9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030069875s STEP: Saw pod success May 20 23:59:37.243: INFO: Pod "downwardapi-volume-0afd40e7-e610-4922-90ba-2ca598ac0e9a" satisfied condition "Succeeded or Failed" May 20 23:59:37.246: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-0afd40e7-e610-4922-90ba-2ca598ac0e9a container client-container: STEP: delete the pod May 20 23:59:37.292: INFO: Waiting for pod downwardapi-volume-0afd40e7-e610-4922-90ba-2ca598ac0e9a to disappear May 20 23:59:37.298: INFO: Pod downwardapi-volume-0afd40e7-e610-4922-90ba-2ca598ac0e9a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:59:37.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1936" for this suite. 
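[editor's note] The Downward API spec above verifies that when a container declares no CPU limit, the downward API falls back to reporting the node's allocatable CPU. Stripped of the API machinery, the defaulting reduces to the sketch below (hypothetical helper name, illustrative millicore values):

```go
package main

import "fmt"

// effectiveCPULimit returns the container's own CPU limit when one is
// set, and otherwise falls back to the node's allocatable CPU -- the
// behavior the "default cpu limit" conformance spec checks. Quantities
// are modeled as plain millicore counts for brevity.
func effectiveCPULimit(containerLimitMilli, nodeAllocatableMilli int64) int64 {
	if containerLimitMilli > 0 {
		return containerLimitMilli
	}
	return nodeAllocatableMilli
}

func main() {
	fmt.Println(effectiveCPULimit(0, 16000))   // no limit set -> 16000 (node allocatable)
	fmt.Println(effectiveCPULimit(500, 16000)) // explicit limit -> 500
}
```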
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":84,"skipped":1519,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:59:37.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-fc9c5e11-cae1-45fe-b917-ace68ad79e74 STEP: Creating a pod to test consume secrets May 20 23:59:37.384: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-757f533a-be2c-4d1e-a772-894db2f46a71" in namespace "projected-546" to be "Succeeded or Failed" May 20 23:59:37.412: INFO: Pod "pod-projected-secrets-757f533a-be2c-4d1e-a772-894db2f46a71": Phase="Pending", Reason="", readiness=false. Elapsed: 27.969088ms May 20 23:59:39.415: INFO: Pod "pod-projected-secrets-757f533a-be2c-4d1e-a772-894db2f46a71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031832116s May 20 23:59:41.420: INFO: Pod "pod-projected-secrets-757f533a-be2c-4d1e-a772-894db2f46a71": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.036073509s STEP: Saw pod success May 20 23:59:41.420: INFO: Pod "pod-projected-secrets-757f533a-be2c-4d1e-a772-894db2f46a71" satisfied condition "Succeeded or Failed" May 20 23:59:41.423: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-757f533a-be2c-4d1e-a772-894db2f46a71 container projected-secret-volume-test: STEP: delete the pod May 20 23:59:41.466: INFO: Waiting for pod pod-projected-secrets-757f533a-be2c-4d1e-a772-894db2f46a71 to disappear May 20 23:59:41.523: INFO: Pod pod-projected-secrets-757f533a-be2c-4d1e-a772-894db2f46a71 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 20 23:59:41.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-546" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":85,"skipped":1548,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 20 23:59:41.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-8990c4d7-b605-4ac0-b2ac-8f78351bd14f in namespace container-probe-791 May 20 23:59:45.678: INFO: Started pod busybox-8990c4d7-b605-4ac0-b2ac-8f78351bd14f in namespace container-probe-791 STEP: checking the pod's current state and verifying that restartCount is present May 20 23:59:45.681: INFO: Initial restart count of pod busybox-8990c4d7-b605-4ac0-b2ac-8f78351bd14f is 0 May 21 00:00:33.794: INFO: Restart count of pod container-probe-791/busybox-8990c4d7-b605-4ac0-b2ac-8f78351bd14f is now 1 (48.112399449s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:00:33.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-791" for this suite. • [SLOW TEST:52.343 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":86,"skipped":1559,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:00:33.885: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 21 00:00:38.551: INFO: Successfully updated pod "annotationupdatec72fa18f-0381-41a6-90c3-b400bd28b990" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:00:42.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8216" for this suite. • [SLOW TEST:8.720 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":87,"skipped":1578,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:00:42.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-5248 STEP: creating service affinity-nodeport-transition in namespace services-5248 STEP: creating replication controller affinity-nodeport-transition in namespace services-5248 I0521 00:00:42.750754 8 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-5248, replica count: 3 I0521 00:00:45.801653 8 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0521 00:00:48.801913 8 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 21 00:00:48.896: INFO: Creating new exec pod May 21 00:00:53.919: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5248 execpod-affinitywvtvn -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' May 21 00:00:54.142: INFO: stderr: "I0521 00:00:54.046456 2390 log.go:172] (0xc0009f1340) (0xc00082bcc0) Create stream\nI0521 00:00:54.046510 2390 log.go:172] (0xc0009f1340) (0xc00082bcc0) Stream added, broadcasting: 1\nI0521 00:00:54.048575 2390 log.go:172] (0xc0009f1340) Reply frame received for 1\nI0521 00:00:54.048622 2390 log.go:172] (0xc0009f1340) (0xc000a5a6e0) Create stream\nI0521 00:00:54.048635 2390 log.go:172] (0xc0009f1340) (0xc000a5a6e0) Stream added, broadcasting: 3\nI0521 00:00:54.049783 2390 log.go:172] (0xc0009f1340) Reply 
frame received for 3\nI0521 00:00:54.049823 2390 log.go:172] (0xc0009f1340) (0xc000832aa0) Create stream\nI0521 00:00:54.049837 2390 log.go:172] (0xc0009f1340) (0xc000832aa0) Stream added, broadcasting: 5\nI0521 00:00:54.050553 2390 log.go:172] (0xc0009f1340) Reply frame received for 5\nI0521 00:00:54.125896 2390 log.go:172] (0xc0009f1340) Data frame received for 5\nI0521 00:00:54.125939 2390 log.go:172] (0xc000832aa0) (5) Data frame handling\nI0521 00:00:54.125960 2390 log.go:172] (0xc000832aa0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nI0521 00:00:54.133387 2390 log.go:172] (0xc0009f1340) Data frame received for 5\nI0521 00:00:54.133407 2390 log.go:172] (0xc000832aa0) (5) Data frame handling\nI0521 00:00:54.133420 2390 log.go:172] (0xc000832aa0) (5) Data frame sent\nI0521 00:00:54.133428 2390 log.go:172] (0xc0009f1340) Data frame received for 5\nI0521 00:00:54.133435 2390 log.go:172] (0xc000832aa0) (5) Data frame handling\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0521 00:00:54.133698 2390 log.go:172] (0xc0009f1340) Data frame received for 3\nI0521 00:00:54.133708 2390 log.go:172] (0xc000a5a6e0) (3) Data frame handling\nI0521 00:00:54.135612 2390 log.go:172] (0xc0009f1340) Data frame received for 1\nI0521 00:00:54.135651 2390 log.go:172] (0xc00082bcc0) (1) Data frame handling\nI0521 00:00:54.135698 2390 log.go:172] (0xc00082bcc0) (1) Data frame sent\nI0521 00:00:54.135726 2390 log.go:172] (0xc0009f1340) (0xc00082bcc0) Stream removed, broadcasting: 1\nI0521 00:00:54.135745 2390 log.go:172] (0xc0009f1340) Go away received\nI0521 00:00:54.136310 2390 log.go:172] (0xc0009f1340) (0xc00082bcc0) Stream removed, broadcasting: 1\nI0521 00:00:54.136334 2390 log.go:172] (0xc0009f1340) (0xc000a5a6e0) Stream removed, broadcasting: 3\nI0521 00:00:54.136345 2390 log.go:172] (0xc0009f1340) (0xc000832aa0) Stream removed, broadcasting: 5\n" May 21 00:00:54.142: INFO: stdout: "" May 21 00:00:54.143: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5248 execpod-affinitywvtvn -- /bin/sh -x -c nc -zv -t -w 2 10.108.244.192 80' May 21 00:00:54.350: INFO: stderr: "I0521 00:00:54.275867 2408 log.go:172] (0xc000a25340) (0xc0009d0140) Create stream\nI0521 00:00:54.275916 2408 log.go:172] (0xc000a25340) (0xc0009d0140) Stream added, broadcasting: 1\nI0521 00:00:54.278276 2408 log.go:172] (0xc000a25340) Reply frame received for 1\nI0521 00:00:54.278323 2408 log.go:172] (0xc000a25340) (0xc0006d8fa0) Create stream\nI0521 00:00:54.278345 2408 log.go:172] (0xc000a25340) (0xc0006d8fa0) Stream added, broadcasting: 3\nI0521 00:00:54.279393 2408 log.go:172] (0xc000a25340) Reply frame received for 3\nI0521 00:00:54.279436 2408 log.go:172] (0xc000a25340) (0xc000537680) Create stream\nI0521 00:00:54.279473 2408 log.go:172] (0xc000a25340) (0xc000537680) Stream added, broadcasting: 5\nI0521 00:00:54.280298 2408 log.go:172] (0xc000a25340) Reply frame received for 5\nI0521 00:00:54.344606 2408 log.go:172] (0xc000a25340) Data frame received for 5\nI0521 00:00:54.344654 2408 log.go:172] (0xc000537680) (5) Data frame handling\nI0521 00:00:54.344672 2408 log.go:172] (0xc000537680) (5) Data frame sent\nI0521 00:00:54.344691 2408 log.go:172] (0xc000a25340) Data frame received for 5\nI0521 00:00:54.344700 2408 log.go:172] (0xc000537680) (5) Data frame handling\n+ nc -zv -t -w 2 10.108.244.192 80\nConnection to 10.108.244.192 80 port [tcp/http] succeeded!\nI0521 00:00:54.344770 2408 log.go:172] (0xc000a25340) Data frame received for 3\nI0521 00:00:54.344797 2408 log.go:172] (0xc0006d8fa0) (3) Data frame handling\nI0521 00:00:54.345954 2408 log.go:172] (0xc000a25340) Data frame received for 1\nI0521 00:00:54.345982 2408 log.go:172] (0xc0009d0140) (1) Data frame handling\nI0521 00:00:54.345995 2408 log.go:172] (0xc0009d0140) (1) Data frame sent\nI0521 00:00:54.346006 2408 log.go:172] (0xc000a25340) (0xc0009d0140) Stream 
removed, broadcasting: 1\nI0521 00:00:54.346020 2408 log.go:172] (0xc000a25340) Go away received\nI0521 00:00:54.346374 2408 log.go:172] (0xc000a25340) (0xc0009d0140) Stream removed, broadcasting: 1\nI0521 00:00:54.346387 2408 log.go:172] (0xc000a25340) (0xc0006d8fa0) Stream removed, broadcasting: 3\nI0521 00:00:54.346394 2408 log.go:172] (0xc000a25340) (0xc000537680) Stream removed, broadcasting: 5\n" May 21 00:00:54.350: INFO: stdout: "" May 21 00:00:54.350: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5248 execpod-affinitywvtvn -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30641' May 21 00:00:54.556: INFO: stderr: "I0521 00:00:54.476199 2424 log.go:172] (0xc0009b96b0) (0xc000852640) Create stream\nI0521 00:00:54.476393 2424 log.go:172] (0xc0009b96b0) (0xc000852640) Stream added, broadcasting: 1\nI0521 00:00:54.484747 2424 log.go:172] (0xc0009b96b0) Reply frame received for 1\nI0521 00:00:54.484793 2424 log.go:172] (0xc0009b96b0) (0xc000852fa0) Create stream\nI0521 00:00:54.484810 2424 log.go:172] (0xc0009b96b0) (0xc000852fa0) Stream added, broadcasting: 3\nI0521 00:00:54.487079 2424 log.go:172] (0xc0009b96b0) Reply frame received for 3\nI0521 00:00:54.487102 2424 log.go:172] (0xc0009b96b0) (0xc000853540) Create stream\nI0521 00:00:54.487110 2424 log.go:172] (0xc0009b96b0) (0xc000853540) Stream added, broadcasting: 5\nI0521 00:00:54.487920 2424 log.go:172] (0xc0009b96b0) Reply frame received for 5\nI0521 00:00:54.547079 2424 log.go:172] (0xc0009b96b0) Data frame received for 5\nI0521 00:00:54.547116 2424 log.go:172] (0xc000853540) (5) Data frame handling\nI0521 00:00:54.547137 2424 log.go:172] (0xc000853540) (5) Data frame sent\nI0521 00:00:54.547160 2424 log.go:172] (0xc0009b96b0) Data frame received for 5\nI0521 00:00:54.547171 2424 log.go:172] (0xc000853540) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30641\nConnection to 172.17.0.13 30641 port [tcp/30641] 
succeeded!\nI0521 00:00:54.547192 2424 log.go:172] (0xc000853540) (5) Data frame sent\nI0521 00:00:54.547233 2424 log.go:172] (0xc0009b96b0) Data frame received for 5\nI0521 00:00:54.547249 2424 log.go:172] (0xc000853540) (5) Data frame handling\nI0521 00:00:54.548001 2424 log.go:172] (0xc0009b96b0) Data frame received for 3\nI0521 00:00:54.548022 2424 log.go:172] (0xc000852fa0) (3) Data frame handling\nI0521 00:00:54.550096 2424 log.go:172] (0xc0009b96b0) Data frame received for 1\nI0521 00:00:54.550114 2424 log.go:172] (0xc000852640) (1) Data frame handling\nI0521 00:00:54.550124 2424 log.go:172] (0xc000852640) (1) Data frame sent\nI0521 00:00:54.550134 2424 log.go:172] (0xc0009b96b0) (0xc000852640) Stream removed, broadcasting: 1\nI0521 00:00:54.550835 2424 log.go:172] (0xc0009b96b0) Go away received\nI0521 00:00:54.551140 2424 log.go:172] (0xc0009b96b0) (0xc000852640) Stream removed, broadcasting: 1\nI0521 00:00:54.551173 2424 log.go:172] (0xc0009b96b0) (0xc000852fa0) Stream removed, broadcasting: 3\nI0521 00:00:54.551188 2424 log.go:172] (0xc0009b96b0) (0xc000853540) Stream removed, broadcasting: 5\n" May 21 00:00:54.556: INFO: stdout: "" May 21 00:00:54.556: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5248 execpod-affinitywvtvn -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30641' May 21 00:00:54.765: INFO: stderr: "I0521 00:00:54.691045 2443 log.go:172] (0xc000a54e70) (0xc00071ef00) Create stream\nI0521 00:00:54.691114 2443 log.go:172] (0xc000a54e70) (0xc00071ef00) Stream added, broadcasting: 1\nI0521 00:00:54.693716 2443 log.go:172] (0xc000a54e70) Reply frame received for 1\nI0521 00:00:54.693762 2443 log.go:172] (0xc000a54e70) (0xc000b88140) Create stream\nI0521 00:00:54.693775 2443 log.go:172] (0xc000a54e70) (0xc000b88140) Stream added, broadcasting: 3\nI0521 00:00:54.694765 2443 log.go:172] (0xc000a54e70) Reply frame received for 3\nI0521 00:00:54.694803 2443 log.go:172] 
(0xc000a54e70) (0xc000b36500) Create stream\nI0521 00:00:54.694817 2443 log.go:172] (0xc000a54e70) (0xc000b36500) Stream added, broadcasting: 5\nI0521 00:00:54.695620 2443 log.go:172] (0xc000a54e70) Reply frame received for 5\nI0521 00:00:54.757992 2443 log.go:172] (0xc000a54e70) Data frame received for 5\nI0521 00:00:54.758032 2443 log.go:172] (0xc000b36500) (5) Data frame handling\nI0521 00:00:54.758046 2443 log.go:172] (0xc000b36500) (5) Data frame sent\nI0521 00:00:54.758054 2443 log.go:172] (0xc000a54e70) Data frame received for 5\nI0521 00:00:54.758061 2443 log.go:172] (0xc000b36500) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30641\nConnection to 172.17.0.12 30641 port [tcp/30641] succeeded!\nI0521 00:00:54.758091 2443 log.go:172] (0xc000a54e70) Data frame received for 3\nI0521 00:00:54.758102 2443 log.go:172] (0xc000b88140) (3) Data frame handling\nI0521 00:00:54.759363 2443 log.go:172] (0xc000a54e70) Data frame received for 1\nI0521 00:00:54.759387 2443 log.go:172] (0xc00071ef00) (1) Data frame handling\nI0521 00:00:54.759413 2443 log.go:172] (0xc00071ef00) (1) Data frame sent\nI0521 00:00:54.759431 2443 log.go:172] (0xc000a54e70) (0xc00071ef00) Stream removed, broadcasting: 1\nI0521 00:00:54.759525 2443 log.go:172] (0xc000a54e70) Go away received\nI0521 00:00:54.759814 2443 log.go:172] (0xc000a54e70) (0xc00071ef00) Stream removed, broadcasting: 1\nI0521 00:00:54.759834 2443 log.go:172] (0xc000a54e70) (0xc000b88140) Stream removed, broadcasting: 3\nI0521 00:00:54.759847 2443 log.go:172] (0xc000a54e70) (0xc000b36500) Stream removed, broadcasting: 5\n" May 21 00:00:54.765: INFO: stdout: "" May 21 00:00:54.773: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5248 execpod-affinitywvtvn -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:30641/ ; done' May 21 00:00:55.126: INFO: stderr: "I0521 00:00:54.908990 2463 
log.go:172] (0xc000b236b0) (0xc000904820) Create stream\nI0521 00:00:54.909036 2463 log.go:172] (0xc000b236b0) (0xc000904820) Stream added, broadcasting: 1\nI0521 00:00:54.910667 2463 log.go:172] (0xc000b236b0) Reply frame received for 1\nI0521 00:00:54.910694 2463 log.go:172] (0xc000b236b0) (0xc0008e2460) Create stream\nI0521 00:00:54.910700 2463 log.go:172] (0xc000b236b0) (0xc0008e2460) Stream added, broadcasting: 3\nI0521 00:00:54.911464 2463 log.go:172] (0xc000b236b0) Reply frame received for 3\nI0521 00:00:54.911523 2463 log.go:172] (0xc000b236b0) (0xc0008ba460) Create stream\nI0521 00:00:54.911549 2463 log.go:172] (0xc000b236b0) (0xc0008ba460) Stream added, broadcasting: 5\nI0521 00:00:54.912278 2463 log.go:172] (0xc000b236b0) Reply frame received for 5\nI0521 00:00:54.971880 2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:54.971911 2463 log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:54.971927 2463 log.go:172] (0xc0008e2460) (3) Data frame sent\nI0521 00:00:54.971948 2463 log.go:172] (0xc000b236b0) Data frame received for 5\nI0521 00:00:54.971956 2463 log.go:172] (0xc0008ba460) (5) Data frame handling\nI0521 00:00:54.971965 2463 log.go:172] (0xc0008ba460) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30641/\nI0521 00:00:55.019537 2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:55.019574 2463 log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:55.019596 2463 log.go:172] (0xc0008e2460) (3) Data frame sent\nI0521 00:00:55.020356 2463 log.go:172] (0xc000b236b0) Data frame received for 5\nI0521 00:00:55.020401 2463 log.go:172] (0xc0008ba460) (5) Data frame handling\nI0521 00:00:55.020422 2463 log.go:172] (0xc0008ba460) (5) Data frame sent\nI0521 00:00:55.020452 2463 log.go:172] (0xc000b236b0) Data frame received for 5\nI0521 00:00:55.020463 2463 log.go:172] (0xc0008ba460) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.17.0.13:30641/\nI0521 00:00:55.020499 2463 log.go:172] (0xc0008ba460) (5) Data frame sent\nI0521 00:00:55.020536 2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:55.020546 2463 log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:55.020554 2463 log.go:172] (0xc0008e2460) (3) Data frame sent\nI0521 00:00:55.027371 2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:55.027390 2463 log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:55.027403 2463 log.go:172] (0xc0008e2460) (3) Data frame sent\nI0521 00:00:55.028030 2463 log.go:172] (0xc000b236b0) Data frame received for 5\nI0521 00:00:55.028060 2463 log.go:172] (0xc0008ba460) (5) Data frame handling\nI0521 00:00:55.028071 2463 log.go:172] (0xc0008ba460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30641/\nI0521 00:00:55.028092 2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:55.028114 2463 log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:55.028124 2463 log.go:172] (0xc0008e2460) (3) Data frame sent\nI0521 00:00:55.035143 2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:55.035169 2463 log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:55.035185 2463 log.go:172] (0xc0008e2460) (3) Data frame sent\nI0521 00:00:55.035823 2463 log.go:172] (0xc000b236b0) Data frame received for 5\nI0521 00:00:55.035853 2463 log.go:172] (0xc0008ba460) (5) Data frame handling\nI0521 00:00:55.035862 2463 log.go:172] (0xc0008ba460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30641/\nI0521 00:00:55.035896 2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:55.035944 2463 log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:55.035998 2463 log.go:172] (0xc0008e2460) (3) Data frame sent\nI0521 00:00:55.042325 2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:55.042348 2463 
log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:55.042373 2463 log.go:172] (0xc0008e2460) (3) Data frame sent\nI0521 00:00:55.043062 2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:55.043122 2463 log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:55.043150 2463 log.go:172] (0xc0008e2460) (3) Data frame sent\nI0521 00:00:55.043188 2463 log.go:172] (0xc000b236b0) Data frame received for 5\nI0521 00:00:55.043215 2463 log.go:172] (0xc0008ba460) (5) Data frame handling\nI0521 00:00:55.043251 2463 log.go:172] (0xc0008ba460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30641/\nI0521 00:00:55.050900 2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:55.050928 2463 log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:55.050942 2463 log.go:172] (0xc0008e2460) (3) Data frame sent\nI0521 00:00:55.051427 2463 log.go:172] (0xc000b236b0) Data frame received for 5\nI0521 00:00:55.051472 2463 log.go:172] (0xc0008ba460) (5) Data frame handling\nI0521 00:00:55.051502 2463 log.go:172] (0xc0008ba460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30641/\nI0521 00:00:55.051528 2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:55.051564 2463 log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:55.051592 2463 log.go:172] (0xc0008e2460) (3) Data frame sent\nI0521 00:00:55.056021 2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:55.056039 2463 log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:55.056054 2463 log.go:172] (0xc0008e2460) (3) Data frame sent\nI0521 00:00:55.056412 2463 log.go:172] (0xc000b236b0) Data frame received for 5\nI0521 00:00:55.056426 2463 log.go:172] (0xc0008ba460) (5) Data frame handling\nI0521 00:00:55.056433 2463 log.go:172] (0xc0008ba460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30641/\nI0521 00:00:55.056447 
2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:55.056464 2463 log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:55.056488 2463 log.go:172] (0xc0008e2460) (3) Data frame sent\nI0521 00:00:55.063450 2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:55.063461 2463 log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:55.063474 2463 log.go:172] (0xc0008e2460) (3) Data frame sent\nI0521 00:00:55.064147 2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:55.064160 2463 log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:55.064167 2463 log.go:172] (0xc0008e2460) (3) Data frame sent\nI0521 00:00:55.064185 2463 log.go:172] (0xc000b236b0) Data frame received for 5\nI0521 00:00:55.064204 2463 log.go:172] (0xc0008ba460) (5) Data frame handling\nI0521 00:00:55.064219 2463 log.go:172] (0xc0008ba460) (5) Data frame sent\nI0521 00:00:55.064230 2463 log.go:172] (0xc000b236b0) Data frame received for 5\nI0521 00:00:55.064238 2463 log.go:172] (0xc0008ba460) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30641/\nI0521 00:00:55.064259 2463 log.go:172] (0xc0008ba460) (5) Data frame sent\nI0521 00:00:55.068830 2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:55.068857 2463 log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:55.068879 2463 log.go:172] (0xc0008e2460) (3) Data frame sent\nI0521 00:00:55.069516 2463 log.go:172] (0xc000b236b0) Data frame received for 5\nI0521 00:00:55.069532 2463 log.go:172] (0xc0008ba460) (5) Data frame handling\nI0521 00:00:55.069542 2463 log.go:172] (0xc0008ba460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30641/\nI0521 00:00:55.069558 2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:55.069570 2463 log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:55.069577 2463 log.go:172] (0xc0008e2460) (3) Data frame sent\nI0521 
00:00:55.073511 2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:55.073543 2463 log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:55.073576 2463 log.go:172] (0xc0008e2460) (3) Data frame sent\nI0521 00:00:55.074476 2463 log.go:172] (0xc000b236b0) Data frame received for 5\nI0521 00:00:55.074487 2463 log.go:172] (0xc0008ba460) (5) Data frame handling\nI0521 00:00:55.074495 2463 log.go:172] (0xc0008ba460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30641/\nI0521 00:00:55.074680 2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:55.074711 2463 log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:55.074735 2463 log.go:172] (0xc0008e2460) (3) Data frame sent\nI0521 00:00:55.078116 2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:55.078150 2463 log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:55.078173 2463 log.go:172] (0xc0008e2460) (3) Data frame sent\nI0521 00:00:55.078960 2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:55.078977 2463 log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:55.078992 2463 log.go:172] (0xc000b236b0) Data frame received for 5\nI0521 00:00:55.079014 2463 log.go:172] (0xc0008ba460) (5) Data frame handling\nI0521 00:00:55.079030 2463 log.go:172] (0xc0008ba460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30641/\nI0521 00:00:55.079057 2463 log.go:172] (0xc0008e2460) (3) Data frame sent\nI0521 00:00:55.086264 2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:55.086289 2463 log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:55.086310 2463 log.go:172] (0xc0008e2460) (3) Data frame sent\nI0521 00:00:55.086971 2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:55.087001 2463 log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:55.087035 2463 log.go:172] (0xc0008e2460) (3) Data 
frame sent\nI0521 00:00:55.087062 2463 log.go:172] (0xc000b236b0) Data frame received for 5\nI0521 00:00:55.087078 2463 log.go:172] (0xc0008ba460) (5) Data frame handling\nI0521 00:00:55.087104 2463 log.go:172] (0xc0008ba460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30641/\nI0521 00:00:55.090594 2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:55.090632 2463 log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:55.090656 2463 log.go:172] (0xc0008e2460) (3) Data frame sent\nI0521 00:00:55.090891 2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:55.090915 2463 log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:55.090943 2463 log.go:172] (0xc0008e2460) (3) Data frame sent\nI0521 00:00:55.090964 2463 log.go:172] (0xc000b236b0) Data frame received for 5\nI0521 00:00:55.090991 2463 log.go:172] (0xc0008ba460) (5) Data frame handling\nI0521 00:00:55.091018 2463 log.go:172] (0xc0008ba460) (5) Data frame sent\nI0521 00:00:55.091037 2463 log.go:172] (0xc000b236b0) Data frame received for 5\nI0521 00:00:55.091054 2463 log.go:172] (0xc0008ba460) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30641/\nI0521 00:00:55.091131 2463 log.go:172] (0xc0008ba460) (5) Data frame sent\nI0521 00:00:55.097019 2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:55.097047 2463 log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:55.097073 2463 log.go:172] (0xc0008e2460) (3) Data frame sent\nI0521 00:00:55.098053 2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:55.098111 2463 log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:55.098138 2463 log.go:172] (0xc0008e2460) (3) Data frame sent\nI0521 00:00:55.098184 2463 log.go:172] (0xc000b236b0) Data frame received for 5\nI0521 00:00:55.098218 2463 log.go:172] (0xc0008ba460) (5) Data frame handling\nI0521 00:00:55.098251 2463 log.go:172] 
(0xc0008ba460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30641/\nI0521 00:00:55.104419 2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:55.104433 2463 log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:55.104445 2463 log.go:172] (0xc0008e2460) (3) Data frame sent\nI0521 00:00:55.104996 2463 log.go:172] (0xc000b236b0) Data frame received for 5\nI0521 00:00:55.105011 2463 log.go:172] (0xc0008ba460) (5) Data frame handling\nI0521 00:00:55.105019 2463 log.go:172] (0xc0008ba460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30641/\nI0521 00:00:55.105027 2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:55.105059 2463 log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:55.105070 2463 log.go:172] (0xc0008e2460) (3) Data frame sent\nI0521 00:00:55.110218 2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:55.110245 2463 log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:55.110269 2463 log.go:172] (0xc0008e2460) (3) Data frame sent\nI0521 00:00:55.110949 2463 log.go:172] (0xc000b236b0) Data frame received for 5\nI0521 00:00:55.110963 2463 log.go:172] (0xc0008ba460) (5) Data frame handling\nI0521 00:00:55.110972 2463 log.go:172] (0xc0008ba460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30641/\nI0521 00:00:55.111040 2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:55.111061 2463 log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:55.111077 2463 log.go:172] (0xc0008e2460) (3) Data frame sent\nI0521 00:00:55.116564 2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:55.116587 2463 log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:55.116602 2463 log.go:172] (0xc0008e2460) (3) Data frame sent\nI0521 00:00:55.117557 2463 log.go:172] (0xc000b236b0) Data frame received for 3\nI0521 00:00:55.117583 2463 
log.go:172] (0xc0008e2460) (3) Data frame handling\nI0521 00:00:55.117601 2463 log.go:172] (0xc000b236b0) Data frame received for 5\nI0521 00:00:55.117621 2463 log.go:172] (0xc0008ba460) (5) Data frame handling\nI0521 00:00:55.119874 2463 log.go:172] (0xc000b236b0) Data frame received for 1\nI0521 00:00:55.119891 2463 log.go:172] (0xc000904820) (1) Data frame handling\nI0521 00:00:55.119901 2463 log.go:172] (0xc000904820) (1) Data frame sent\nI0521 00:00:55.119952 2463 log.go:172] (0xc000b236b0) (0xc000904820) Stream removed, broadcasting: 1\nI0521 00:00:55.119975 2463 log.go:172] (0xc000b236b0) Go away received\nI0521 00:00:55.120423 2463 log.go:172] (0xc000b236b0) (0xc000904820) Stream removed, broadcasting: 1\nI0521 00:00:55.120444 2463 log.go:172] (0xc000b236b0) (0xc0008e2460) Stream removed, broadcasting: 3\nI0521 00:00:55.120452 2463 log.go:172] (0xc000b236b0) (0xc0008ba460) Stream removed, broadcasting: 5\n" May 21 00:00:55.126: INFO: stdout: "\naffinity-nodeport-transition-jl676\naffinity-nodeport-transition-ftch9\naffinity-nodeport-transition-gbv55\naffinity-nodeport-transition-gbv55\naffinity-nodeport-transition-ftch9\naffinity-nodeport-transition-ftch9\naffinity-nodeport-transition-ftch9\naffinity-nodeport-transition-jl676\naffinity-nodeport-transition-ftch9\naffinity-nodeport-transition-jl676\naffinity-nodeport-transition-gbv55\naffinity-nodeport-transition-ftch9\naffinity-nodeport-transition-gbv55\naffinity-nodeport-transition-gbv55\naffinity-nodeport-transition-ftch9\naffinity-nodeport-transition-gbv55" May 21 00:00:55.126: INFO: Received response from host: May 21 00:00:55.126: INFO: Received response from host: affinity-nodeport-transition-jl676 May 21 00:00:55.127: INFO: Received response from host: affinity-nodeport-transition-ftch9 May 21 00:00:55.127: INFO: Received response from host: affinity-nodeport-transition-gbv55 May 21 00:00:55.127: INFO: Received response from host: affinity-nodeport-transition-gbv55 May 21 00:00:55.127: INFO: Received 
response from host: affinity-nodeport-transition-ftch9 May 21 00:00:55.127: INFO: Received response from host: affinity-nodeport-transition-ftch9 May 21 00:00:55.127: INFO: Received response from host: affinity-nodeport-transition-ftch9 May 21 00:00:55.127: INFO: Received response from host: affinity-nodeport-transition-jl676 May 21 00:00:55.127: INFO: Received response from host: affinity-nodeport-transition-ftch9 May 21 00:00:55.127: INFO: Received response from host: affinity-nodeport-transition-jl676 May 21 00:00:55.127: INFO: Received response from host: affinity-nodeport-transition-gbv55 May 21 00:00:55.127: INFO: Received response from host: affinity-nodeport-transition-ftch9 May 21 00:00:55.127: INFO: Received response from host: affinity-nodeport-transition-gbv55 May 21 00:00:55.127: INFO: Received response from host: affinity-nodeport-transition-gbv55 May 21 00:00:55.127: INFO: Received response from host: affinity-nodeport-transition-ftch9 May 21 00:00:55.127: INFO: Received response from host: affinity-nodeport-transition-gbv55 May 21 00:00:55.135: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5248 execpod-affinitywvtvn -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:30641/ ; done' May 21 00:00:55.439: INFO: stderr: "I0521 00:00:55.280543 2481 log.go:172] (0xc000527810) (0xc0003fae60) Create stream\nI0521 00:00:55.280595 2481 log.go:172] (0xc000527810) (0xc0003fae60) Stream added, broadcasting: 1\nI0521 00:00:55.283365 2481 log.go:172] (0xc000527810) Reply frame received for 1\nI0521 00:00:55.283431 2481 log.go:172] (0xc000527810) (0xc00067b2c0) Create stream\nI0521 00:00:55.283445 2481 log.go:172] (0xc000527810) (0xc00067b2c0) Stream added, broadcasting: 3\nI0521 00:00:55.284535 2481 log.go:172] (0xc000527810) Reply frame received for 3\nI0521 00:00:55.284583 2481 log.go:172] (0xc000527810) (0xc0003652c0) Create 
stream\nI0521 00:00:55.284609 2481 log.go:172] (0xc000527810) (0xc0003652c0) Stream added, broadcasting: 5\nI0521 00:00:55.285950 2481 log.go:172] (0xc000527810) Reply frame received for 5\nI0521 00:00:55.344409 2481 log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.344460 2481 log.go:172] (0xc00067b2c0) (3) Data frame handling\nI0521 00:00:55.344483 2481 log.go:172] (0xc00067b2c0) (3) Data frame sent\nI0521 00:00:55.344521 2481 log.go:172] (0xc000527810) Data frame received for 5\nI0521 00:00:55.344636 2481 log.go:172] (0xc0003652c0) (5) Data frame handling\nI0521 00:00:55.344680 2481 log.go:172] (0xc0003652c0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30641/\nI0521 00:00:55.348183 2481 log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.348200 2481 log.go:172] (0xc00067b2c0) (3) Data frame handling\nI0521 00:00:55.348209 2481 log.go:172] (0xc00067b2c0) (3) Data frame sent\nI0521 00:00:55.348664 2481 log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.348684 2481 log.go:172] (0xc00067b2c0) (3) Data frame handling\nI0521 00:00:55.348693 2481 log.go:172] (0xc00067b2c0) (3) Data frame sent\nI0521 00:00:55.348707 2481 log.go:172] (0xc000527810) Data frame received for 5\nI0521 00:00:55.348712 2481 log.go:172] (0xc0003652c0) (5) Data frame handling\nI0521 00:00:55.348717 2481 log.go:172] (0xc0003652c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30641/\nI0521 00:00:55.355603 2481 log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.355624 2481 log.go:172] (0xc00067b2c0) (3) Data frame handling\nI0521 00:00:55.355636 2481 log.go:172] (0xc00067b2c0) (3) Data frame sent\nI0521 00:00:55.356143 2481 log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.356171 2481 log.go:172] (0xc00067b2c0) (3) Data frame handling\nI0521 00:00:55.356179 2481 log.go:172] (0xc00067b2c0) (3) Data frame sent\nI0521 
00:00:55.356190 2481 log.go:172] (0xc000527810) Data frame received for 5\nI0521 00:00:55.356195 2481 log.go:172] (0xc0003652c0) (5) Data frame handling\nI0521 00:00:55.356201 2481 log.go:172] (0xc0003652c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30641/\nI0521 00:00:55.360896 2481 log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.360923 2481 log.go:172] (0xc00067b2c0) (3) Data frame handling\nI0521 00:00:55.360948 2481 log.go:172] (0xc00067b2c0) (3) Data frame sent\nI0521 00:00:55.362081 2481 log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.362103 2481 log.go:172] (0xc00067b2c0) (3) Data frame handling\nI0521 00:00:55.362117 2481 log.go:172] (0xc00067b2c0) (3) Data frame sent\nI0521 00:00:55.362144 2481 log.go:172] (0xc000527810) Data frame received for 5\nI0521 00:00:55.362155 2481 log.go:172] (0xc0003652c0) (5) Data frame handling\nI0521 00:00:55.362165 2481 log.go:172] (0xc0003652c0) (5) Data frame sent\nI0521 00:00:55.362177 2481 log.go:172] (0xc000527810) Data frame received for 5\nI0521 00:00:55.362206 2481 log.go:172] (0xc0003652c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30641/\nI0521 00:00:55.362244 2481 log.go:172] (0xc0003652c0) (5) Data frame sent\nI0521 00:00:55.368455 2481 log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.368473 2481 log.go:172] (0xc00067b2c0) (3) Data frame handling\nI0521 00:00:55.368487 2481 log.go:172] (0xc00067b2c0) (3) Data frame sent\nI0521 00:00:55.368958 2481 log.go:172] (0xc000527810) Data frame received for 5\nI0521 00:00:55.368979 2481 log.go:172] (0xc0003652c0) (5) Data frame handling\nI0521 00:00:55.368986 2481 log.go:172] (0xc0003652c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30641/\nI0521 00:00:55.369032 2481 log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.369058 2481 log.go:172] (0xc00067b2c0) (3) Data frame 
handling\nI0521 00:00:55.369071 2481 log.go:172] (0xc00067b2c0) (3) Data frame sent\nI0521 00:00:55.372742 2481 log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.372763 2481 log.go:172] (0xc00067b2c0) (3) Data frame handling\nI0521 00:00:55.372788 2481 log.go:172] (0xc00067b2c0) (3) Data frame sent\nI0521 00:00:55.373334 2481 log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.373373 2481 log.go:172] (0xc00067b2c0) (3) Data frame handling\nI0521 00:00:55.373387 2481 log.go:172] (0xc00067b2c0) (3) Data frame sent\nI0521 00:00:55.373408 2481 log.go:172] (0xc000527810) Data frame received for 5\nI0521 00:00:55.373418 2481 log.go:172] (0xc0003652c0) (5) Data frame handling\nI0521 00:00:55.373431 2481 log.go:172] (0xc0003652c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30641/\nI0521 00:00:55.378194 2481 log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.378216 2481 log.go:172] (0xc00067b2c0) (3) Data frame handling\nI0521 00:00:55.378236 2481 log.go:172] (0xc00067b2c0) (3) Data frame sent\nI0521 00:00:55.378721 2481 log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.378745 2481 log.go:172] (0xc00067b2c0) (3) Data frame handling\nI0521 00:00:55.378754 2481 log.go:172] (0xc00067b2c0) (3) Data frame sent\nI0521 00:00:55.378766 2481 log.go:172] (0xc000527810) Data frame received for 5\nI0521 00:00:55.378773 2481 log.go:172] (0xc0003652c0) (5) Data frame handling\nI0521 00:00:55.378786 2481 log.go:172] (0xc0003652c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30641/\nI0521 00:00:55.382813 2481 log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.382855 2481 log.go:172] (0xc00067b2c0) (3) Data frame handling\nI0521 00:00:55.382882 2481 log.go:172] (0xc00067b2c0) (3) Data frame sent\nI0521 00:00:55.383168 2481 log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.383191 2481 log.go:172] 
(0xc00067b2c0) (3) Data frame handling\nI0521 00:00:55.383199 2481 log.go:172] (0xc00067b2c0) (3) Data frame sent\nI0521 00:00:55.383213 2481 log.go:172] (0xc000527810) Data frame received for 5\nI0521 00:00:55.383220 2481 log.go:172] (0xc0003652c0) (5) Data frame handling\nI0521 00:00:55.383234 2481 log.go:172] (0xc0003652c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30641/\nI0521 00:00:55.386735 2481 log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.386777 2481 log.go:172] (0xc00067b2c0) (3) Data frame handling\nI0521 00:00:55.386795 2481 log.go:172] (0xc00067b2c0) (3) Data frame sent\nI0521 00:00:55.387058 2481 log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.387071 2481 log.go:172] (0xc00067b2c0) (3) Data frame handling\nI0521 00:00:55.387083 2481 log.go:172] (0xc00067b2c0) (3) Data frame sent\nI0521 00:00:55.387114 2481 log.go:172] (0xc000527810) Data frame received for 5\nI0521 00:00:55.387151 2481 log.go:172] (0xc0003652c0) (5) Data frame handling\nI0521 00:00:55.387177 2481 log.go:172] (0xc0003652c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30641/\nI0521 00:00:55.393849 2481 log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.393880 2481 log.go:172] (0xc00067b2c0) (3) Data frame handling\nI0521 00:00:55.393912 2481 log.go:172] (0xc00067b2c0) (3) Data frame sent\nI0521 00:00:55.394455 2481 log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.394482 2481 log.go:172] (0xc00067b2c0) (3) Data frame handling\nI0521 00:00:55.394498 2481 log.go:172] (0xc00067b2c0) (3) Data frame sent\nI0521 00:00:55.394532 2481 log.go:172] (0xc000527810) Data frame received for 5\nI0521 00:00:55.394544 2481 log.go:172] (0xc0003652c0) (5) Data frame handling\nI0521 00:00:55.394558 2481 log.go:172] (0xc0003652c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30641/\nI0521 00:00:55.399718 2481 
log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.399741 2481 log.go:172] (0xc00067b2c0) (3) Data frame handling\nI0521 00:00:55.399762 2481 log.go:172] (0xc00067b2c0) (3) Data frame sent\nI0521 00:00:55.400178 2481 log.go:172] (0xc000527810) Data frame received for 5\nI0521 00:00:55.400197 2481 log.go:172] (0xc0003652c0) (5) Data frame handling\nI0521 00:00:55.400222 2481 log.go:172] (0xc0003652c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30641/\nI0521 00:00:55.400246 2481 log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.400266 2481 log.go:172] (0xc00067b2c0) (3) Data frame handling\nI0521 00:00:55.400283 2481 log.go:172] (0xc00067b2c0) (3) Data frame sent\nI0521 00:00:55.405095 2481 log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.405106 2481 log.go:172] (0xc00067b2c0) (3) Data frame handling\nI0521 00:00:55.405250 2481 log.go:172] (0xc00067b2c0) (3) Data frame sent\nI0521 00:00:55.405954 2481 log.go:172] (0xc000527810) Data frame received for 5\nI0521 00:00:55.405986 2481 log.go:172] (0xc0003652c0) (5) Data frame handling\nI0521 00:00:55.406000 2481 log.go:172] (0xc0003652c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30641/\nI0521 00:00:55.406017 2481 log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.406037 2481 log.go:172] (0xc00067b2c0) (3) Data frame handling\nI0521 00:00:55.406049 2481 log.go:172] (0xc00067b2c0) (3) Data frame sent\nI0521 00:00:55.411499 2481 log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.411514 2481 log.go:172] (0xc00067b2c0) (3) Data frame handling\nI0521 00:00:55.411529 2481 log.go:172] (0xc00067b2c0) (3) Data frame sent\nI0521 00:00:55.412133 2481 log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.412151 2481 log.go:172] (0xc00067b2c0) (3) Data frame handling\nI0521 00:00:55.412165 2481 log.go:172] (0xc00067b2c0) (3) Data frame sent\nI0521 
00:00:55.412196 2481 log.go:172] (0xc000527810) Data frame received for 5\nI0521 00:00:55.412208 2481 log.go:172] (0xc0003652c0) (5) Data frame handling\nI0521 00:00:55.412219 2481 log.go:172] (0xc0003652c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30641/\nI0521 00:00:55.416193 2481 log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.416204 2481 log.go:172] (0xc00067b2c0) (3) Data frame handling\nI0521 00:00:55.416214 2481 log.go:172] (0xc00067b2c0) (3) Data frame sent\nI0521 00:00:55.416566 2481 log.go:172] (0xc000527810) Data frame received for 5\nI0521 00:00:55.416597 2481 log.go:172] (0xc0003652c0) (5) Data frame handling\nI0521 00:00:55.416614 2481 log.go:172] (0xc0003652c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30641/\nI0521 00:00:55.416632 2481 log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.416644 2481 log.go:172] (0xc00067b2c0) (3) Data frame handling\nI0521 00:00:55.416662 2481 log.go:172] (0xc00067b2c0) (3) Data frame sent\nI0521 00:00:55.421054 2481 log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.421072 2481 log.go:172] (0xc00067b2c0) (3) Data frame handling\nI0521 00:00:55.421087 2481 log.go:172] (0xc00067b2c0) (3) Data frame sent\nI0521 00:00:55.421824 2481 log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.421858 2481 log.go:172] (0xc00067b2c0) (3) Data frame handling\nI0521 00:00:55.421893 2481 log.go:172] (0xc00067b2c0) (3) Data frame sent\nI0521 00:00:55.421918 2481 log.go:172] (0xc000527810) Data frame received for 5\nI0521 00:00:55.421930 2481 log.go:172] (0xc0003652c0) (5) Data frame handling\nI0521 00:00:55.421940 2481 log.go:172] (0xc0003652c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30641/\nI0521 00:00:55.426095 2481 log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.426112 2481 log.go:172] (0xc00067b2c0) (3) Data frame 
handling\nI0521 00:00:55.426120 2481 log.go:172] (0xc00067b2c0) (3) Data frame sent\nI0521 00:00:55.426666 2481 log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.426689 2481 log.go:172] (0xc00067b2c0) (3) Data frame handling\nI0521 00:00:55.426706 2481 log.go:172] (0xc00067b2c0) (3) Data frame sent\nI0521 00:00:55.426731 2481 log.go:172] (0xc000527810) Data frame received for 5\nI0521 00:00:55.426743 2481 log.go:172] (0xc0003652c0) (5) Data frame handling\nI0521 00:00:55.426756 2481 log.go:172] (0xc0003652c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:30641/\nI0521 00:00:55.431251 2481 log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.431274 2481 log.go:172] (0xc00067b2c0) (3) Data frame handling\nI0521 00:00:55.431299 2481 log.go:172] (0xc00067b2c0) (3) Data frame sent\nI0521 00:00:55.431861 2481 log.go:172] (0xc000527810) Data frame received for 3\nI0521 00:00:55.431884 2481 log.go:172] (0xc00067b2c0) (3) Data frame handling\nI0521 00:00:55.432104 2481 log.go:172] (0xc000527810) Data frame received for 5\nI0521 00:00:55.432121 2481 log.go:172] (0xc0003652c0) (5) Data frame handling\nI0521 00:00:55.433834 2481 log.go:172] (0xc000527810) Data frame received for 1\nI0521 00:00:55.433849 2481 log.go:172] (0xc0003fae60) (1) Data frame handling\nI0521 00:00:55.433858 2481 log.go:172] (0xc0003fae60) (1) Data frame sent\nI0521 00:00:55.433885 2481 log.go:172] (0xc000527810) (0xc0003fae60) Stream removed, broadcasting: 1\nI0521 00:00:55.433932 2481 log.go:172] (0xc000527810) Go away received\nI0521 00:00:55.434774 2481 log.go:172] (0xc000527810) (0xc0003fae60) Stream removed, broadcasting: 1\nI0521 00:00:55.434820 2481 log.go:172] (0xc000527810) (0xc00067b2c0) Stream removed, broadcasting: 3\nI0521 00:00:55.434832 2481 log.go:172] (0xc000527810) (0xc0003652c0) Stream removed, broadcasting: 5\n" May 21 00:00:55.440: INFO: stdout: 
"\naffinity-nodeport-transition-ftch9\naffinity-nodeport-transition-ftch9\naffinity-nodeport-transition-ftch9\naffinity-nodeport-transition-ftch9\naffinity-nodeport-transition-ftch9\naffinity-nodeport-transition-ftch9\naffinity-nodeport-transition-ftch9\naffinity-nodeport-transition-ftch9\naffinity-nodeport-transition-ftch9\naffinity-nodeport-transition-ftch9\naffinity-nodeport-transition-ftch9\naffinity-nodeport-transition-ftch9\naffinity-nodeport-transition-ftch9\naffinity-nodeport-transition-ftch9\naffinity-nodeport-transition-ftch9\naffinity-nodeport-transition-ftch9" May 21 00:00:55.440: INFO: Received response from host: May 21 00:00:55.440: INFO: Received response from host: affinity-nodeport-transition-ftch9 May 21 00:00:55.440: INFO: Received response from host: affinity-nodeport-transition-ftch9 May 21 00:00:55.440: INFO: Received response from host: affinity-nodeport-transition-ftch9 May 21 00:00:55.440: INFO: Received response from host: affinity-nodeport-transition-ftch9 May 21 00:00:55.440: INFO: Received response from host: affinity-nodeport-transition-ftch9 May 21 00:00:55.440: INFO: Received response from host: affinity-nodeport-transition-ftch9 May 21 00:00:55.440: INFO: Received response from host: affinity-nodeport-transition-ftch9 May 21 00:00:55.440: INFO: Received response from host: affinity-nodeport-transition-ftch9 May 21 00:00:55.440: INFO: Received response from host: affinity-nodeport-transition-ftch9 May 21 00:00:55.440: INFO: Received response from host: affinity-nodeport-transition-ftch9 May 21 00:00:55.440: INFO: Received response from host: affinity-nodeport-transition-ftch9 May 21 00:00:55.440: INFO: Received response from host: affinity-nodeport-transition-ftch9 May 21 00:00:55.440: INFO: Received response from host: affinity-nodeport-transition-ftch9 May 21 00:00:55.440: INFO: Received response from host: affinity-nodeport-transition-ftch9 May 21 00:00:55.440: INFO: Received response from host: affinity-nodeport-transition-ftch9 
May 21 00:00:55.440: INFO: Received response from host: affinity-nodeport-transition-ftch9 May 21 00:00:55.440: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-5248, will wait for the garbage collector to delete the pods May 21 00:00:55.675: INFO: Deleting ReplicationController affinity-nodeport-transition took: 96.431416ms May 21 00:00:56.075: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 400.285425ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:01:05.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5248" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:22.730 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":88,"skipped":1590,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:01:05.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-4bc70977-b296-4f32-a239-54f346047191 STEP: Creating a pod to test consume configMaps May 21 00:01:05.447: INFO: Waiting up to 5m0s for pod "pod-configmaps-38d7717b-aa57-4736-8c69-9cfc09672f9c" in namespace "configmap-3557" to be "Succeeded or Failed" May 21 00:01:05.450: INFO: Pod "pod-configmaps-38d7717b-aa57-4736-8c69-9cfc09672f9c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.601009ms May 21 00:01:07.455: INFO: Pod "pod-configmaps-38d7717b-aa57-4736-8c69-9cfc09672f9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00787855s May 21 00:01:09.458: INFO: Pod "pod-configmaps-38d7717b-aa57-4736-8c69-9cfc09672f9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011833009s STEP: Saw pod success May 21 00:01:09.459: INFO: Pod "pod-configmaps-38d7717b-aa57-4736-8c69-9cfc09672f9c" satisfied condition "Succeeded or Failed" May 21 00:01:09.461: INFO: Trying to get logs from node latest-worker pod pod-configmaps-38d7717b-aa57-4736-8c69-9cfc09672f9c container configmap-volume-test: STEP: delete the pod May 21 00:01:09.537: INFO: Waiting for pod pod-configmaps-38d7717b-aa57-4736-8c69-9cfc09672f9c to disappear May 21 00:01:09.565: INFO: Pod pod-configmaps-38d7717b-aa57-4736-8c69-9cfc09672f9c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:01:09.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3557" for this suite. 
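The ConfigMap defaultMode test above creates its objects programmatically through the Go client, so no manifest appears in the log. As a rough sketch only, a hand-written equivalent might look like the fragment below; the ConfigMap data, image, mount path, and mode value are illustrative assumptions, not taken from the log (the test names its objects with generated UUIDs such as configmap-test-volume-4bc70977-...):

```yaml
# Hypothetical stand-in for what the e2e framework builds in Go.
# defaultMode: 0400 makes the projected file read-only for the owner;
# the test container lists the file to verify the mode took effect.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
      defaultMode: 0400
```

The log's "Succeeded or Failed" wait corresponds to this pod running to completion (restartPolicy: Never), after which the framework reads the container log to check the reported file mode.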
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":89,"skipped":1601,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:01:09.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:01:16.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7998" for this suite. • [SLOW TEST:7.172 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":288,"completed":90,"skipped":1671,"failed":0} SS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:01:16.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test hostPath mode May 21 00:01:16.812: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4595" to be "Succeeded or Failed" May 21 00:01:16.828: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 15.972253ms May 21 00:01:18.835: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022861992s May 21 00:01:20.840: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027119648s May 21 00:01:22.843: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.03082815s STEP: Saw pod success May 21 00:01:22.843: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" May 21 00:01:22.846: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 21 00:01:22.920: INFO: Waiting for pod pod-host-path-test to disappear May 21 00:01:22.937: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:01:22.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-4595" for this suite. • [SLOW TEST:6.199 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":91,"skipped":1673,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:01:22.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-41a0da1d-6e8c-4ca3-a84d-709474469ae1 STEP: Creating configMap with name cm-test-opt-upd-2983fab6-eeff-40c7-987a-ba5fd9681a6b STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-41a0da1d-6e8c-4ca3-a84d-709474469ae1 STEP: Updating configmap cm-test-opt-upd-2983fab6-eeff-40c7-987a-ba5fd9681a6b STEP: Creating configMap with name cm-test-opt-create-45a96edf-31c7-423a-9633-1d9628d7b826 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:01:31.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3310" for this suite. • [SLOW TEST:8.297 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":92,"skipped":1695,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:01:31.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting 
for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-ee4765f1-7c32-4dbb-9780-ba92df035c69 STEP: Creating a pod to test consume secrets May 21 00:01:31.334: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d13dae6c-82c2-4ac8-8e39-bacbc006bb76" in namespace "projected-737" to be "Succeeded or Failed" May 21 00:01:31.354: INFO: Pod "pod-projected-secrets-d13dae6c-82c2-4ac8-8e39-bacbc006bb76": Phase="Pending", Reason="", readiness=false. Elapsed: 20.390158ms May 21 00:01:33.530: INFO: Pod "pod-projected-secrets-d13dae6c-82c2-4ac8-8e39-bacbc006bb76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196402862s May 21 00:01:35.536: INFO: Pod "pod-projected-secrets-d13dae6c-82c2-4ac8-8e39-bacbc006bb76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.201659237s STEP: Saw pod success May 21 00:01:35.536: INFO: Pod "pod-projected-secrets-d13dae6c-82c2-4ac8-8e39-bacbc006bb76" satisfied condition "Succeeded or Failed" May 21 00:01:35.539: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-d13dae6c-82c2-4ac8-8e39-bacbc006bb76 container projected-secret-volume-test: STEP: delete the pod May 21 00:01:35.652: INFO: Waiting for pod pod-projected-secrets-d13dae6c-82c2-4ac8-8e39-bacbc006bb76 to disappear May 21 00:01:35.661: INFO: Pod pod-projected-secrets-d13dae6c-82c2-4ac8-8e39-bacbc006bb76 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:01:35.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-737" for this suite. 
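The projected-secret test just above exercises defaultMode together with a pod-level fsGroup. A minimal sketch of that combination is below; the secret name, UID/GID values, and mode are illustrative assumptions (the actual test generates UUID-suffixed names like projected-secret-test-ee4765f1-...):

```yaml
# Hypothetical sketch: a non-root pod consuming a secret through a
# projected volume. fsGroup causes the projected files to be group-owned
# by GID 1000; defaultMode sets their permission bits.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
    fsGroup: 1000
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0440
      sources:
      - secret:
          name: projected-secret-test
```

As with the ConfigMap case, the framework waits for the pod to reach Succeeded and then inspects the container log to confirm ownership and mode of the mounted files.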
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":93,"skipped":1696,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:01:35.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8205.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8205.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8205.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8205.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8205.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8205.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8205.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.test-service-2.dns-8205.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8205.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8205.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8205.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 255.221.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.221.255_udp@PTR;check="$$(dig +tcp +noall +answer +search 255.221.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.221.255_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8205.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8205.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8205.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8205.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8205.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8205.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8205.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/jessie_udp@_http._tcp.test-service-2.dns-8205.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8205.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8205.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8205.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 255.221.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.221.255_udp@PTR;check="$$(dig +tcp +noall +answer +search 255.221.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.221.255_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 21 00:01:43.867: INFO: Unable to read wheezy_udp@dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:01:43.871: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:01:43.874: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:01:43.877: INFO: Unable to read 
wheezy_tcp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:01:43.900: INFO: Unable to read jessie_udp@dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:01:43.904: INFO: Unable to read jessie_tcp@dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:01:43.907: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:01:43.910: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:01:43.930: INFO: Lookups using dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be failed for: [wheezy_udp@dns-test-service.dns-8205.svc.cluster.local wheezy_tcp@dns-test-service.dns-8205.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local jessie_udp@dns-test-service.dns-8205.svc.cluster.local jessie_tcp@dns-test-service.dns-8205.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local] May 21 00:01:48.936: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:01:48.940: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:01:48.944: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:01:48.948: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:01:48.969: INFO: Unable to read jessie_udp@dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:01:48.972: INFO: Unable to read jessie_tcp@dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:01:48.975: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:01:48.978: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local from pod 
dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:01:48.994: INFO: Lookups using dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be failed for: [wheezy_udp@dns-test-service.dns-8205.svc.cluster.local wheezy_tcp@dns-test-service.dns-8205.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local jessie_udp@dns-test-service.dns-8205.svc.cluster.local jessie_tcp@dns-test-service.dns-8205.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local] May 21 00:01:53.935: INFO: Unable to read wheezy_udp@dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:01:53.937: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:01:53.939: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:01:53.941: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:01:53.958: INFO: Unable to read jessie_udp@dns-test-service.dns-8205.svc.cluster.local from pod 
dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:01:53.961: INFO: Unable to read jessie_tcp@dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:01:53.963: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:01:53.973: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:01:53.987: INFO: Lookups using dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be failed for: [wheezy_udp@dns-test-service.dns-8205.svc.cluster.local wheezy_tcp@dns-test-service.dns-8205.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local jessie_udp@dns-test-service.dns-8205.svc.cluster.local jessie_tcp@dns-test-service.dns-8205.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local] May 21 00:01:58.934: INFO: Unable to read wheezy_udp@dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:01:58.938: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8205.svc.cluster.local from pod 
dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:01:58.940: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:01:58.942: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:01:58.961: INFO: Unable to read jessie_udp@dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:01:58.963: INFO: Unable to read jessie_tcp@dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:01:58.966: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:01:58.968: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:01:58.983: INFO: Lookups using dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be failed for: [wheezy_udp@dns-test-service.dns-8205.svc.cluster.local 
wheezy_tcp@dns-test-service.dns-8205.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local jessie_udp@dns-test-service.dns-8205.svc.cluster.local jessie_tcp@dns-test-service.dns-8205.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local] May 21 00:02:03.936: INFO: Unable to read wheezy_udp@dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:02:03.940: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:02:03.943: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:02:03.946: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:02:03.967: INFO: Unable to read jessie_udp@dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:02:03.969: INFO: Unable to read jessie_tcp@dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested 
resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:02:03.972: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:02:03.974: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:02:03.991: INFO: Lookups using dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be failed for: [wheezy_udp@dns-test-service.dns-8205.svc.cluster.local wheezy_tcp@dns-test-service.dns-8205.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local jessie_udp@dns-test-service.dns-8205.svc.cluster.local jessie_tcp@dns-test-service.dns-8205.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local] May 21 00:02:08.936: INFO: Unable to read wheezy_udp@dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:02:08.940: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:02:08.944: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods 
dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:02:08.948: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:02:08.972: INFO: Unable to read jessie_udp@dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:02:08.974: INFO: Unable to read jessie_tcp@dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:02:08.976: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:02:08.979: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local from pod dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be: the server could not find the requested resource (get pods dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be) May 21 00:02:08.995: INFO: Lookups using dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be failed for: [wheezy_udp@dns-test-service.dns-8205.svc.cluster.local wheezy_tcp@dns-test-service.dns-8205.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local jessie_udp@dns-test-service.dns-8205.svc.cluster.local jessie_tcp@dns-test-service.dns-8205.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local 
jessie_tcp@_http._tcp.dns-test-service.dns-8205.svc.cluster.local] May 21 00:02:13.993: INFO: DNS probes using dns-8205/dns-test-3758a193-f2f4-4c0f-9958-a3e333ceb4be succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:02:14.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8205" for this suite. • [SLOW TEST:39.339 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":288,"completed":94,"skipped":1707,"failed":0} [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:02:15.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
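The probe container driving the lookups in the DNS test above runs a shell loop of the form `check="$(dig +noall +answer +search <name> <type>)" && test -n "$check" && echo OK > /results/<name>` (the doubled `$$` in the logged command is the container-spec escape for a literal `$`). A minimal standalone sketch of that pattern, with a hypothetical `lookup` stub in place of `dig` so it runs without a cluster:

```shell
#!/bin/sh
# Sketch of the e2e DNS probe pattern: run a lookup and, only when it returns
# a non-empty answer, drop an OK marker file for the prober to collect.
# `lookup` is a stand-in for `dig +noall +answer +search <name> <type>`.
results_dir=$(mktemp -d)

lookup() {
    # Stub resolver: pretend only the service name resolves.
    case "$1" in
        dns-test-service.*) echo "10.96.0.10" ;;
        *) ;;                       # empty answer, like a failed dig
    esac
}

probe() {
    name="$1"
    check="$(lookup "$name")" && test -n "$check" && echo OK > "$results_dir/$name"
}

probe "dns-test-service.dns-8205.svc.cluster.local"
probe "unknown-service.dns-8205.svc.cluster.local" || true   # no marker written

ls "$results_dir"    # only the resolvable name leaves a marker file
```

The test framework then reads the marker files back out of the pod (the "looking for the results for each expected name" step) and retries every few seconds until all expected names have an OK marker, which is why the same failure block repeats above until the final "DNS probes ... succeeded" line.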
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:02:32.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4584" for this suite. • [SLOW TEST:17.164 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":288,"completed":95,"skipped":1707,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:02:32.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-6162 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-6162 May 21 00:02:32.310: INFO: Found 0 stateful pods, waiting for 1 May 21 00:02:42.315: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 21 00:02:42.374: INFO: Deleting all statefulset in ns statefulset-6162 May 21 00:02:42.388: INFO: Scaling statefulset ss to 0 May 21 00:03:02.494: INFO: Waiting for statefulset status.replicas updated to 0 May 21 00:03:02.497: INFO: Deleting 
statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:03:02.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6162" for this suite. • [SLOW TEST:30.346 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":288,"completed":96,"skipped":1717,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:03:02.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 21 00:03:02.625: INFO: Waiting up to 5m0s for pod "pod-5a6ec3f9-73d0-46e3-ac74-cd24127610ca" in namespace "emptydir-3612" 
to be "Succeeded or Failed" May 21 00:03:02.649: INFO: Pod "pod-5a6ec3f9-73d0-46e3-ac74-cd24127610ca": Phase="Pending", Reason="", readiness=false. Elapsed: 23.87149ms May 21 00:03:04.652: INFO: Pod "pod-5a6ec3f9-73d0-46e3-ac74-cd24127610ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02740296s May 21 00:03:06.656: INFO: Pod "pod-5a6ec3f9-73d0-46e3-ac74-cd24127610ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031023083s STEP: Saw pod success May 21 00:03:06.656: INFO: Pod "pod-5a6ec3f9-73d0-46e3-ac74-cd24127610ca" satisfied condition "Succeeded or Failed" May 21 00:03:06.659: INFO: Trying to get logs from node latest-worker pod pod-5a6ec3f9-73d0-46e3-ac74-cd24127610ca container test-container: STEP: delete the pod May 21 00:03:06.868: INFO: Waiting for pod pod-5a6ec3f9-73d0-46e3-ac74-cd24127610ca to disappear May 21 00:03:06.879: INFO: Pod pod-5a6ec3f9-73d0-46e3-ac74-cd24127610ca no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:03:06.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3612" for this suite. 
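The emptydir case above checks a `(non-root,0666,default)` combination: a non-root user writing a mode-0666 file to an emptyDir on the default medium, with the pod expected to reach Succeeded. A sketch of a pod that exercises the same combination (names and image are illustrative; the e2e suite uses its own test image):

```yaml
# Illustrative pod for the (non-root,0666,default) emptyDir case:
# run as a non-root UID, mount an emptyDir on the default (node disk)
# medium, write a 0666 file, and exit 0 so the pod reaches Succeeded.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo      # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001             # non-root
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                # default medium, not medium: Memory
```

The "Waiting up to 5m0s for pod ... to be Succeeded or Failed" polling above corresponds to watching this pod's phase move Pending → Succeeded, after which the framework pulls the container logs to verify the file mode.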
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":97,"skipped":1741,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:03:06.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-2172 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-2172 I0521 00:03:07.141882 8 runners.go:190] Created replication controller with name: externalname-service, namespace: services-2172, replica count: 2 I0521 00:03:10.192383 8 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0521 00:03:13.192657 8 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 21 00:03:13.192: INFO: Creating new exec pod May 21 00:03:18.244: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config exec --namespace=services-2172 execpodhf9dc -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 21 00:03:18.460: INFO: stderr: "I0521 00:03:18.376422 2502 log.go:172] (0xc000afd810) (0xc000611c20) Create stream\nI0521 00:03:18.376488 2502 log.go:172] (0xc000afd810) (0xc000611c20) Stream added, broadcasting: 1\nI0521 00:03:18.381934 2502 log.go:172] (0xc000afd810) Reply frame received for 1\nI0521 00:03:18.381979 2502 log.go:172] (0xc000afd810) (0xc00063ea00) Create stream\nI0521 00:03:18.381994 2502 log.go:172] (0xc000afd810) (0xc00063ea00) Stream added, broadcasting: 3\nI0521 00:03:18.382925 2502 log.go:172] (0xc000afd810) Reply frame received for 3\nI0521 00:03:18.382987 2502 log.go:172] (0xc000afd810) (0xc0005c8dc0) Create stream\nI0521 00:03:18.383005 2502 log.go:172] (0xc000afd810) (0xc0005c8dc0) Stream added, broadcasting: 5\nI0521 00:03:18.383890 2502 log.go:172] (0xc000afd810) Reply frame received for 5\nI0521 00:03:18.451291 2502 log.go:172] (0xc000afd810) Data frame received for 5\nI0521 00:03:18.451326 2502 log.go:172] (0xc0005c8dc0) (5) Data frame handling\nI0521 00:03:18.451376 2502 log.go:172] (0xc0005c8dc0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0521 00:03:18.451587 2502 log.go:172] (0xc000afd810) Data frame received for 5\nI0521 00:03:18.451628 2502 log.go:172] (0xc0005c8dc0) (5) Data frame handling\nI0521 00:03:18.451677 2502 log.go:172] (0xc0005c8dc0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0521 00:03:18.451992 2502 log.go:172] (0xc000afd810) Data frame received for 5\nI0521 00:03:18.452098 2502 log.go:172] (0xc0005c8dc0) (5) Data frame handling\nI0521 00:03:18.452214 2502 log.go:172] (0xc000afd810) Data frame received for 3\nI0521 00:03:18.452317 2502 log.go:172] (0xc00063ea00) (3) Data frame handling\nI0521 00:03:18.454146 2502 log.go:172] (0xc000afd810) Data frame received for 1\nI0521 00:03:18.454182 2502 log.go:172] (0xc000611c20) 
(1) Data frame handling\nI0521 00:03:18.454210 2502 log.go:172] (0xc000611c20) (1) Data frame sent\nI0521 00:03:18.454233 2502 log.go:172] (0xc000afd810) (0xc000611c20) Stream removed, broadcasting: 1\nI0521 00:03:18.454261 2502 log.go:172] (0xc000afd810) Go away received\nI0521 00:03:18.454702 2502 log.go:172] (0xc000afd810) (0xc000611c20) Stream removed, broadcasting: 1\nI0521 00:03:18.454749 2502 log.go:172] (0xc000afd810) (0xc00063ea00) Stream removed, broadcasting: 3\nI0521 00:03:18.454767 2502 log.go:172] (0xc000afd810) (0xc0005c8dc0) Stream removed, broadcasting: 5\n" May 21 00:03:18.460: INFO: stdout: "" May 21 00:03:18.461: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2172 execpodhf9dc -- /bin/sh -x -c nc -zv -t -w 2 10.111.49.206 80' May 21 00:03:18.683: INFO: stderr: "I0521 00:03:18.612869 2523 log.go:172] (0xc000a604d0) (0xc00083e5a0) Create stream\nI0521 00:03:18.612927 2523 log.go:172] (0xc000a604d0) (0xc00083e5a0) Stream added, broadcasting: 1\nI0521 00:03:18.616065 2523 log.go:172] (0xc000a604d0) Reply frame received for 1\nI0521 00:03:18.616107 2523 log.go:172] (0xc000a604d0) (0xc00083ee60) Create stream\nI0521 00:03:18.616123 2523 log.go:172] (0xc000a604d0) (0xc00083ee60) Stream added, broadcasting: 3\nI0521 00:03:18.617848 2523 log.go:172] (0xc000a604d0) Reply frame received for 3\nI0521 00:03:18.617894 2523 log.go:172] (0xc000a604d0) (0xc00083f360) Create stream\nI0521 00:03:18.617917 2523 log.go:172] (0xc000a604d0) (0xc00083f360) Stream added, broadcasting: 5\nI0521 00:03:18.619280 2523 log.go:172] (0xc000a604d0) Reply frame received for 5\nI0521 00:03:18.676413 2523 log.go:172] (0xc000a604d0) Data frame received for 5\nI0521 00:03:18.676478 2523 log.go:172] (0xc00083f360) (5) Data frame handling\nI0521 00:03:18.676507 2523 log.go:172] (0xc00083f360) (5) Data frame sent\nI0521 00:03:18.676527 2523 log.go:172] (0xc000a604d0) Data frame received for 5\nI0521 
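Each `kubectl exec` above runs the same reachability check, `nc -zv -t -w 2 <host> <port>`, which exits 0 only when a TCP connect succeeds within the 2-second timeout; the test repeats it against every way of reaching the service after the type change: the service DNS name, the ClusterIP, and each node IP on the NodePort. A sketch of that verification loop, with a stub in place of `nc` so it runs outside the cluster:

```shell
#!/bin/sh
# Loop over every endpoint the ExternalName->NodePort test probes.
# check_tcp stands in for `nc -zv -t -w 2 "$host" "$port"`; here it is a
# stub that accepts exactly the endpoints the test expects to be reachable.
check_tcp() {
    case "$1:$2" in
        externalname-service:80|10.111.49.206:80|172.17.0.13:31055|172.17.0.12:31055)
            return 0 ;;
        *)  return 1 ;;
    esac
}

failures=0
for endpoint in \
    "externalname-service 80" \
    "10.111.49.206 80" \
    "172.17.0.13 31055" \
    "172.17.0.12 31055"
do
    set -- $endpoint              # split "host port" into $1 and $2
    if check_tcp "$1" "$2"; then
        echo "Connection to $1 $2 succeeded"
    else
        echo "Connection to $1 $2 FAILED"
        failures=$((failures + 1))
    fi
done
echo "failures=$failures"
```

The surrounding `log.go:172` stream noise in the stderr above is the SPDY exec transport (streams 1/3/5 are the error, stdout, and stderr channels); the actual check result is the single `Connection to ... succeeded!` line interleaved within it.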
00:03:18.676551 2523 log.go:172] (0xc00083f360) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.49.206 80\nConnection to 10.111.49.206 80 port [tcp/http] succeeded!\nI0521 00:03:18.676584 2523 log.go:172] (0xc00083f360) (5) Data frame sent\nI0521 00:03:18.676611 2523 log.go:172] (0xc000a604d0) Data frame received for 3\nI0521 00:03:18.676625 2523 log.go:172] (0xc00083ee60) (3) Data frame handling\nI0521 00:03:18.676771 2523 log.go:172] (0xc000a604d0) Data frame received for 5\nI0521 00:03:18.676789 2523 log.go:172] (0xc00083f360) (5) Data frame handling\nI0521 00:03:18.678948 2523 log.go:172] (0xc000a604d0) Data frame received for 1\nI0521 00:03:18.678965 2523 log.go:172] (0xc00083e5a0) (1) Data frame handling\nI0521 00:03:18.678980 2523 log.go:172] (0xc00083e5a0) (1) Data frame sent\nI0521 00:03:18.678993 2523 log.go:172] (0xc000a604d0) (0xc00083e5a0) Stream removed, broadcasting: 1\nI0521 00:03:18.679199 2523 log.go:172] (0xc000a604d0) Go away received\nI0521 00:03:18.679267 2523 log.go:172] (0xc000a604d0) (0xc00083e5a0) Stream removed, broadcasting: 1\nI0521 00:03:18.679287 2523 log.go:172] (0xc000a604d0) (0xc00083ee60) Stream removed, broadcasting: 3\nI0521 00:03:18.679293 2523 log.go:172] (0xc000a604d0) (0xc00083f360) Stream removed, broadcasting: 5\n" May 21 00:03:18.683: INFO: stdout: "" May 21 00:03:18.684: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2172 execpodhf9dc -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31055' May 21 00:03:18.916: INFO: stderr: "I0521 00:03:18.833486 2543 log.go:172] (0xc000a66630) (0xc00054e960) Create stream\nI0521 00:03:18.833702 2543 log.go:172] (0xc000a66630) (0xc00054e960) Stream added, broadcasting: 1\nI0521 00:03:18.836412 2543 log.go:172] (0xc000a66630) Reply frame received for 1\nI0521 00:03:18.836449 2543 log.go:172] (0xc000a66630) (0xc0004ff7c0) Create stream\nI0521 00:03:18.836461 2543 log.go:172] (0xc000a66630) (0xc0004ff7c0) 
Stream added, broadcasting: 3\nI0521 00:03:18.837550 2543 log.go:172] (0xc000a66630) Reply frame received for 3\nI0521 00:03:18.837586 2543 log.go:172] (0xc000a66630) (0xc00043a640) Create stream\nI0521 00:03:18.837596 2543 log.go:172] (0xc000a66630) (0xc00043a640) Stream added, broadcasting: 5\nI0521 00:03:18.838618 2543 log.go:172] (0xc000a66630) Reply frame received for 5\nI0521 00:03:18.909032 2543 log.go:172] (0xc000a66630) Data frame received for 5\nI0521 00:03:18.909063 2543 log.go:172] (0xc00043a640) (5) Data frame handling\nI0521 00:03:18.909091 2543 log.go:172] (0xc00043a640) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 31055\nConnection to 172.17.0.13 31055 port [tcp/31055] succeeded!\nI0521 00:03:18.909860 2543 log.go:172] (0xc000a66630) Data frame received for 5\nI0521 00:03:18.909887 2543 log.go:172] (0xc00043a640) (5) Data frame handling\nI0521 00:03:18.909910 2543 log.go:172] (0xc000a66630) Data frame received for 3\nI0521 00:03:18.909944 2543 log.go:172] (0xc0004ff7c0) (3) Data frame handling\nI0521 00:03:18.911173 2543 log.go:172] (0xc000a66630) Data frame received for 1\nI0521 00:03:18.911203 2543 log.go:172] (0xc00054e960) (1) Data frame handling\nI0521 00:03:18.911227 2543 log.go:172] (0xc00054e960) (1) Data frame sent\nI0521 00:03:18.911254 2543 log.go:172] (0xc000a66630) (0xc00054e960) Stream removed, broadcasting: 1\nI0521 00:03:18.911287 2543 log.go:172] (0xc000a66630) Go away received\nI0521 00:03:18.911604 2543 log.go:172] (0xc000a66630) (0xc00054e960) Stream removed, broadcasting: 1\nI0521 00:03:18.911620 2543 log.go:172] (0xc000a66630) (0xc0004ff7c0) Stream removed, broadcasting: 3\nI0521 00:03:18.911627 2543 log.go:172] (0xc000a66630) (0xc00043a640) Stream removed, broadcasting: 5\n" May 21 00:03:18.916: INFO: stdout: "" May 21 00:03:18.916: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2172 execpodhf9dc -- /bin/sh -x -c nc -zv -t -w 2 
172.17.0.12 31055' May 21 00:03:19.118: INFO: stderr: "I0521 00:03:19.051540 2563 log.go:172] (0xc0009108f0) (0xc000bf6460) Create stream\nI0521 00:03:19.051597 2563 log.go:172] (0xc0009108f0) (0xc000bf6460) Stream added, broadcasting: 1\nI0521 00:03:19.054338 2563 log.go:172] (0xc0009108f0) Reply frame received for 1\nI0521 00:03:19.054398 2563 log.go:172] (0xc0009108f0) (0xc000512e60) Create stream\nI0521 00:03:19.054412 2563 log.go:172] (0xc0009108f0) (0xc000512e60) Stream added, broadcasting: 3\nI0521 00:03:19.055379 2563 log.go:172] (0xc0009108f0) Reply frame received for 3\nI0521 00:03:19.055422 2563 log.go:172] (0xc0009108f0) (0xc0006f8000) Create stream\nI0521 00:03:19.055439 2563 log.go:172] (0xc0009108f0) (0xc0006f8000) Stream added, broadcasting: 5\nI0521 00:03:19.056370 2563 log.go:172] (0xc0009108f0) Reply frame received for 5\nI0521 00:03:19.108728 2563 log.go:172] (0xc0009108f0) Data frame received for 5\nI0521 00:03:19.108768 2563 log.go:172] (0xc0006f8000) (5) Data frame handling\nI0521 00:03:19.108805 2563 log.go:172] (0xc0006f8000) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 31055\nI0521 00:03:19.108902 2563 log.go:172] (0xc0009108f0) Data frame received for 5\nI0521 00:03:19.108918 2563 log.go:172] (0xc0006f8000) (5) Data frame handling\nI0521 00:03:19.108928 2563 log.go:172] (0xc0006f8000) (5) Data frame sent\nConnection to 172.17.0.12 31055 port [tcp/31055] succeeded!\nI0521 00:03:19.109376 2563 log.go:172] (0xc0009108f0) Data frame received for 3\nI0521 00:03:19.109405 2563 log.go:172] (0xc000512e60) (3) Data frame handling\nI0521 00:03:19.109428 2563 log.go:172] (0xc0009108f0) Data frame received for 5\nI0521 00:03:19.109455 2563 log.go:172] (0xc0006f8000) (5) Data frame handling\nI0521 00:03:19.111563 2563 log.go:172] (0xc0009108f0) Data frame received for 1\nI0521 00:03:19.111584 2563 log.go:172] (0xc000bf6460) (1) Data frame handling\nI0521 00:03:19.111594 2563 log.go:172] (0xc000bf6460) (1) Data frame sent\nI0521 00:03:19.111616 
2563 log.go:172] (0xc0009108f0) (0xc000bf6460) Stream removed, broadcasting: 1\nI0521 00:03:19.111633 2563 log.go:172] (0xc0009108f0) Go away received\nI0521 00:03:19.112017 2563 log.go:172] (0xc0009108f0) (0xc000bf6460) Stream removed, broadcasting: 1\nI0521 00:03:19.112040 2563 log.go:172] (0xc0009108f0) (0xc000512e60) Stream removed, broadcasting: 3\nI0521 00:03:19.112052 2563 log.go:172] (0xc0009108f0) (0xc0006f8000) Stream removed, broadcasting: 5\n" May 21 00:03:19.118: INFO: stdout: "" May 21 00:03:19.118: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:03:19.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2172" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:12.304 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":288,"completed":98,"skipped":1753,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:03:19.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for 
a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-2970, will wait for the garbage collector to delete the pods May 21 00:03:25.411: INFO: Deleting Job.batch foo took: 5.476484ms May 21 00:03:25.512: INFO: Terminating Job.batch foo pods took: 100.184228ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:04:05.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2970" for this suite. • [SLOW TEST:46.134 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":288,"completed":99,"skipped":1775,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:04:05.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 21 00:04:05.402: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0f8a6dbe-a365-4971-bbf0-57e8b1eb7e30" in namespace "projected-5551" to be "Succeeded or Failed" May 21 00:04:05.428: INFO: Pod "downwardapi-volume-0f8a6dbe-a365-4971-bbf0-57e8b1eb7e30": Phase="Pending", Reason="", readiness=false. Elapsed: 25.60227ms May 21 00:04:07.431: INFO: Pod "downwardapi-volume-0f8a6dbe-a365-4971-bbf0-57e8b1eb7e30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029345036s May 21 00:04:09.436: INFO: Pod "downwardapi-volume-0f8a6dbe-a365-4971-bbf0-57e8b1eb7e30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033649722s STEP: Saw pod success May 21 00:04:09.436: INFO: Pod "downwardapi-volume-0f8a6dbe-a365-4971-bbf0-57e8b1eb7e30" satisfied condition "Succeeded or Failed" May 21 00:04:09.438: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-0f8a6dbe-a365-4971-bbf0-57e8b1eb7e30 container client-container: STEP: delete the pod May 21 00:04:09.476: INFO: Waiting for pod downwardapi-volume-0f8a6dbe-a365-4971-bbf0-57e8b1eb7e30 to disappear May 21 00:04:09.481: INFO: Pod downwardapi-volume-0f8a6dbe-a365-4971-bbf0-57e8b1eb7e30 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:04:09.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5551" for this suite. 
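The "Waiting up to 5m0s for pod … to be 'Succeeded or Failed'" lines above come from a simple polling loop: fetch the pod's phase, log the elapsed time, and stop once the phase is terminal or the timeout expires. A minimal sketch of that loop (illustrative Python, not the framework's actual Go code; `get_phase` is a stand-in for an API call):

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, poll=2.0,
                           clock=time.monotonic, sleep=time.sleep):
    """Poll a pod's phase until it is terminal ("Succeeded" or "Failed"),
    mirroring the "Waiting up to 5m0s for pod ..." loop in the log above."""
    start = clock()
    while True:
        phase = get_phase()          # stand-in for a GET on the pod
        elapsed = clock() - start
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed    # condition "Succeeded or Failed" satisfied
        if elapsed > timeout:
            raise TimeoutError(f"pod still {phase} after {elapsed:.1f}s")
        sleep(poll)                  # the e2e framework polls every ~2s

# Stubbed phase source reproducing the logged transitions:
# Pending, Pending, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
phase, elapsed = wait_for_pod_condition(lambda: next(phases), poll=0)
```

The log's repeated "Phase=Pending … Elapsed: …" entries are one line per iteration of this loop.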
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":100,"skipped":1777,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:04:09.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs May 21 00:04:09.540: INFO: Waiting up to 5m0s for pod "pod-032c3fba-a5be-41a3-87f0-b04a19ef5d14" in namespace "emptydir-4725" to be "Succeeded or Failed" May 21 00:04:09.543: INFO: Pod "pod-032c3fba-a5be-41a3-87f0-b04a19ef5d14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.745771ms May 21 00:04:11.547: INFO: Pod "pod-032c3fba-a5be-41a3-87f0-b04a19ef5d14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006952631s May 21 00:04:13.599: INFO: Pod "pod-032c3fba-a5be-41a3-87f0-b04a19ef5d14": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.058623613s STEP: Saw pod success May 21 00:04:13.599: INFO: Pod "pod-032c3fba-a5be-41a3-87f0-b04a19ef5d14" satisfied condition "Succeeded or Failed" May 21 00:04:13.601: INFO: Trying to get logs from node latest-worker2 pod pod-032c3fba-a5be-41a3-87f0-b04a19ef5d14 container test-container: STEP: delete the pod May 21 00:04:13.886: INFO: Waiting for pod pod-032c3fba-a5be-41a3-87f0-b04a19ef5d14 to disappear May 21 00:04:13.935: INFO: Pod pod-032c3fba-a5be-41a3-87f0-b04a19ef5d14 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:04:13.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4725" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":101,"skipped":1779,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:04:13.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-d4439a95-1345-4ec4-88e4-01710bbdd709 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-d4439a95-1345-4ec4-88e4-01710bbdd709 STEP: waiting to observe update in volume [AfterEach] 
[sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:04:22.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5465" for this suite. • [SLOW TEST:8.289 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":102,"skipped":1787,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:04:22.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-9d01aa6e-5e0b-4c29-8698-b5a8e9f3d7b9 STEP: Creating configMap with name cm-test-opt-upd-64864e9e-223a-448d-b622-00e96344602c STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-9d01aa6e-5e0b-4c29-8698-b5a8e9f3d7b9 STEP: Updating configmap cm-test-opt-upd-64864e9e-223a-448d-b622-00e96344602c STEP: Creating configMap with name 
cm-test-opt-create-3fc710a3-12a2-4a28-a86b-48b95d1021f6 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:05:34.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7651" for this suite. • [SLOW TEST:72.496 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":103,"skipped":1792,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:05:34.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:05:38.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4773" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":104,"skipped":1803,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:05:38.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 21 00:05:38.899: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 21 00:05:38.922: INFO: Waiting for terminating namespaces to be deleted... 
May 21 00:05:38.925: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 21 00:05:38.930: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 21 00:05:38.930: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 21 00:05:38.930: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 21 00:05:38.930: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 21 00:05:38.930: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 21 00:05:38.930: INFO: Container kindnet-cni ready: true, restart count 0 May 21 00:05:38.930: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 21 00:05:38.930: INFO: Container kube-proxy ready: true, restart count 0 May 21 00:05:38.930: INFO: busybox-host-aliases26b627ff-7a8d-44c9-b00d-1b6216b7d1dd from kubelet-test-4773 started at 2020-05-21 00:05:34 +0000 UTC (1 container statuses recorded) May 21 00:05:38.930: INFO: Container busybox-host-aliases26b627ff-7a8d-44c9-b00d-1b6216b7d1dd ready: true, restart count 0 May 21 00:05:38.930: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 21 00:05:38.934: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 21 00:05:38.934: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 21 00:05:38.934: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 21 00:05:38.934: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 21 00:05:38.934: INFO: kindnet-jl4dn from kube-system 
started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 21 00:05:38.934: INFO: Container kindnet-cni ready: true, restart count 0 May 21 00:05:38.934: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 21 00:05:38.934: INFO: Container kube-proxy ready: true, restart count 0 May 21 00:05:38.934: INFO: pod-projected-configmaps-2fab60a2-6552-4306-ac47-626539622e02 from projected-7651 started at 2020-05-21 00:04:22 +0000 UTC (3 container statuses recorded) May 21 00:05:38.934: INFO: Container createcm-volume-test ready: true, restart count 0 May 21 00:05:38.934: INFO: Container delcm-volume-test ready: true, restart count 0 May 21 00:05:38.934: INFO: Container updcm-volume-test ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-0fbc369f-3097-437d-884e-4ae842f34b80 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-0fbc369f-3097-437d-884e-4ae842f34b80 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-0fbc369f-3097-437d-884e-4ae842f34b80 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:10:47.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8682" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.285 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":288,"completed":105,"skipped":1824,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:10:47.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 21 00:10:47.223: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7604 /api/v1/namespaces/watch-7604/configmaps/e2e-watch-test-configmap-a 57d78f5b-0dc5-48a9-b3d4-09e08e4c5005 6351805 0 2020-05-21 00:10:47 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-21 00:10:47 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 21 00:10:47.223: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7604 /api/v1/namespaces/watch-7604/configmaps/e2e-watch-test-configmap-a 57d78f5b-0dc5-48a9-b3d4-09e08e4c5005 6351805 0 2020-05-21 00:10:47 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-21 00:10:47 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 21 00:10:57.228: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7604 
/api/v1/namespaces/watch-7604/configmaps/e2e-watch-test-configmap-a 57d78f5b-0dc5-48a9-b3d4-09e08e4c5005 6351855 0 2020-05-21 00:10:47 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-21 00:10:57 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 21 00:10:57.228: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7604 /api/v1/namespaces/watch-7604/configmaps/e2e-watch-test-configmap-a 57d78f5b-0dc5-48a9-b3d4-09e08e4c5005 6351855 0 2020-05-21 00:10:47 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-21 00:10:57 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 21 00:11:07.238: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7604 /api/v1/namespaces/watch-7604/configmaps/e2e-watch-test-configmap-a 57d78f5b-0dc5-48a9-b3d4-09e08e4c5005 6351887 0 2020-05-21 00:10:47 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-21 00:11:07 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 21 00:11:07.238: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7604 /api/v1/namespaces/watch-7604/configmaps/e2e-watch-test-configmap-a 57d78f5b-0dc5-48a9-b3d4-09e08e4c5005 6351887 0 2020-05-21 00:10:47 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-21 00:11:07 +0000 
UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 21 00:11:17.244: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7604 /api/v1/namespaces/watch-7604/configmaps/e2e-watch-test-configmap-a 57d78f5b-0dc5-48a9-b3d4-09e08e4c5005 6351919 0 2020-05-21 00:10:47 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-21 00:11:07 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 21 00:11:17.244: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7604 /api/v1/namespaces/watch-7604/configmaps/e2e-watch-test-configmap-a 57d78f5b-0dc5-48a9-b3d4-09e08e4c5005 6351919 0 2020-05-21 00:10:47 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-21 00:11:07 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 21 00:11:27.253: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7604 /api/v1/namespaces/watch-7604/configmaps/e2e-watch-test-configmap-b 9552ee04-ec0b-467e-b027-565fc9e95b67 6351949 0 2020-05-21 00:11:27 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-21 00:11:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 21 
00:11:27.253: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7604 /api/v1/namespaces/watch-7604/configmaps/e2e-watch-test-configmap-b 9552ee04-ec0b-467e-b027-565fc9e95b67 6351949 0 2020-05-21 00:11:27 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-21 00:11:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 21 00:11:37.261: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7604 /api/v1/namespaces/watch-7604/configmaps/e2e-watch-test-configmap-b 9552ee04-ec0b-467e-b027-565fc9e95b67 6351978 0 2020-05-21 00:11:27 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-21 00:11:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 21 00:11:37.261: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7604 /api/v1/namespaces/watch-7604/configmaps/e2e-watch-test-configmap-b 9552ee04-ec0b-467e-b027-565fc9e95b67 6351978 0 2020-05-21 00:11:27 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-21 00:11:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:11:47.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7604" for this suite. 
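The Watchers test above opens three watches filtered by label selector (label A, label B, and A-or-B) and checks that each watcher sees exactly the events for the configmaps its selector matches: A sees ADDED/MODIFIED/MODIFIED/DELETED for configmap A, B sees ADDED/DELETED for configmap B, and the A-or-B watcher sees all six. A small sketch of that bookkeeping (illustrative Python, not the e2e Go code; the label key and values are taken from the log above):

```python
def matches(selector, labels):
    # selector: set of acceptable values for the "watch-this-configmap" label
    return labels.get("watch-this-configmap") in selector

watchers = {
    "A":  {"selector": {"multiple-watchers-A"}, "seen": []},
    "B":  {"selector": {"multiple-watchers-B"}, "seen": []},
    "AB": {"selector": {"multiple-watchers-A", "multiple-watchers-B"}, "seen": []},
}

# The event stream logged above: configmap A is added, modified twice,
# and deleted; then configmap B is added and deleted.
events = [
    ("ADDED",    {"watch-this-configmap": "multiple-watchers-A"}),
    ("MODIFIED", {"watch-this-configmap": "multiple-watchers-A"}),
    ("MODIFIED", {"watch-this-configmap": "multiple-watchers-A"}),
    ("DELETED",  {"watch-this-configmap": "multiple-watchers-A"}),
    ("ADDED",    {"watch-this-configmap": "multiple-watchers-B"}),
    ("DELETED",  {"watch-this-configmap": "multiple-watchers-B"}),
]

# Dispatch each event only to the watchers whose selector matches.
for kind, labels in events:
    for w in watchers.values():
        if matches(w["selector"], labels):
            w["seen"].append(kind)

assert watchers["A"]["seen"] == ["ADDED", "MODIFIED", "MODIFIED", "DELETED"]
assert watchers["B"]["seen"] == ["ADDED", "DELETED"]
assert watchers["AB"]["seen"] == ["ADDED", "MODIFIED", "MODIFIED",
                                  "DELETED", "ADDED", "DELETED"]
```

This is why each "Got : …" line appears twice in the log: once from the single-label watcher and once from the A-or-B watcher.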
• [SLOW TEST:60.133 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":288,"completed":106,"skipped":1830,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:11:47.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container May 21 00:11:51.900: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6666 pod-service-account-af4cb901-38fb-44a8-b78b-7f886f083809 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 21 00:11:55.152: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6666 pod-service-account-af4cb901-38fb-44a8-b78b-7f886f083809 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 21 00:11:55.366: INFO: Running '/usr/local/bin/kubectl exec 
--namespace=svcaccounts-6666 pod-service-account-af4cb901-38fb-44a8-b78b-7f886f083809 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:11:55.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6666" for this suite. • [SLOW TEST:8.310 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":288,"completed":107,"skipped":1865,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:11:55.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0521 00:12:36.654451 8 
metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 21 00:12:36.654: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:12:36.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2928" for this suite. 
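The garbage-collector test above deletes a replication controller with delete options that request orphaning, then waits 30 seconds to confirm the pods are *not* cascade-deleted. The semantics being verified: with `propagationPolicy: Orphan`, the owner is removed and the dependents' ownerReferences are stripped, but the dependents survive. A toy in-memory model of that behavior (illustrative Python with hypothetical object names, not the real garbage collector):

```python
# Hypothetical owner/dependent objects; names are illustrative only.
objects = {
    "rc/test-rc":    {"ownerReferences": []},
    "pod/test-rc-1": {"ownerReferences": ["rc/test-rc"]},
    "pod/test-rc-2": {"ownerReferences": ["rc/test-rc"]},
}

def delete(name, propagation_policy="Background"):
    """Delete an object, modeling Kubernetes deletion propagation."""
    del objects[name]
    for dep_name in [n for n, o in objects.items()
                     if name in o["ownerReferences"]]:
        if propagation_policy == "Orphan":
            # Orphan: keep the dependent, just drop the owner reference.
            objects[dep_name]["ownerReferences"].remove(name)
        else:
            # Background/Foreground would delete the dependent too.
            del objects[dep_name]

delete("rc/test-rc", propagation_policy="Orphan")

# The rc is gone, but its pods survive as ownerless objects --
# exactly what "wait for 30 seconds to see if the garbage collector
# mistakenly deletes the pods" checks above.
assert "rc/test-rc" not in objects
assert sorted(objects) == ["pod/test-rc-1", "pod/test-rc-2"]
assert all(o["ownerReferences"] == [] for o in objects.values())
```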
• [SLOW TEST:41.078 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":288,"completed":108,"skipped":1955,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:12:36.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command May 21 00:12:36.731: INFO: Waiting up to 5m0s for pod "var-expansion-f9df1cb2-6f66-40ac-80e3-146503f0f6d3" in namespace "var-expansion-8869" to be "Succeeded or Failed" May 21 00:12:36.775: INFO: Pod "var-expansion-f9df1cb2-6f66-40ac-80e3-146503f0f6d3": Phase="Pending", Reason="", readiness=false. Elapsed: 44.586402ms May 21 00:12:38.780: INFO: Pod "var-expansion-f9df1cb2-6f66-40ac-80e3-146503f0f6d3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.049066509s May 21 00:12:40.783: INFO: Pod "var-expansion-f9df1cb2-6f66-40ac-80e3-146503f0f6d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052473193s STEP: Saw pod success May 21 00:12:40.783: INFO: Pod "var-expansion-f9df1cb2-6f66-40ac-80e3-146503f0f6d3" satisfied condition "Succeeded or Failed" May 21 00:12:40.785: INFO: Trying to get logs from node latest-worker pod var-expansion-f9df1cb2-6f66-40ac-80e3-146503f0f6d3 container dapi-container: STEP: delete the pod May 21 00:12:40.863: INFO: Waiting for pod var-expansion-f9df1cb2-6f66-40ac-80e3-146503f0f6d3 to disappear May 21 00:12:40.895: INFO: Pod var-expansion-f9df1cb2-6f66-40ac-80e3-146503f0f6d3 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:12:40.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8869" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":288,"completed":109,"skipped":1974,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:12:40.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl replace 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1559 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 21 00:12:41.133: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-9308' May 21 00:12:41.249: INFO: stderr: "" May 21 00:12:41.249: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 21 00:12:46.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-9308 -o json' May 21 00:12:46.560: INFO: stderr: "" May 21 00:12:46.560: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-21T00:12:41Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-21T00:12:41Z\"\n },\n {\n \"apiVersion\": \"v1\",\n 
\"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.1.136\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-21T00:12:44Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-9308\",\n \"resourceVersion\": \"6352411\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-9308/pods/e2e-test-httpd-pod\",\n \"uid\": \"b48bc750-c3cf-48cb-8b84-daa51bb02076\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-676c4\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": 
\"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-676c4\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-676c4\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-21T00:12:41Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-21T00:12:44Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-21T00:12:44Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-21T00:12:41Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://1174fb394741f84349c67af45d4df75ab32c5731b61c4180e0754657e9f098f9\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-21T00:12:44Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.13\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.136\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.136\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-21T00:12:41Z\"\n }\n}\n" STEP: replace the image in the pod May 21 00:12:46.561: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-9308' May 21 00:12:47.062: INFO: 
stderr: "" May 21 00:12:47.062: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1564 May 21 00:12:47.249: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9308' May 21 00:12:54.859: INFO: stderr: "" May 21 00:12:54.859: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:12:54.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9308" for this suite. • [SLOW TEST:13.985 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1555 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":288,"completed":110,"skipped":1979,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:12:54.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default 
service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:12:55.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-3384" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":288,"completed":111,"skipped":1993,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:12:55.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-a3e4d3ec-2537-4d4b-a4bc-b641a2e4a8fd in namespace container-probe-6401 May 21 00:12:59.181: INFO: Started pod liveness-a3e4d3ec-2537-4d4b-a4bc-b641a2e4a8fd in namespace container-probe-6401 STEP: checking the pod's current state and verifying that restartCount is present May 21 00:12:59.184: INFO: Initial restart count of pod liveness-a3e4d3ec-2537-4d4b-a4bc-b641a2e4a8fd is 0 STEP: deleting the pod [AfterEach] 
[k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:16:59.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6401" for this suite. • [SLOW TEST:244.801 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":288,"completed":112,"skipped":2004,"failed":0} S ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:16:59.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 21 00:17:00.296: INFO: Waiting up to 5m0s for pod "downward-api-d6f23ce3-f813-4708-9ea6-779c96679668" in namespace "downward-api-6434" to be "Succeeded or Failed" May 21 00:17:00.324: INFO: Pod "downward-api-d6f23ce3-f813-4708-9ea6-779c96679668": Phase="Pending", Reason="", readiness=false. 
Elapsed: 28.221325ms May 21 00:17:02.328: INFO: Pod "downward-api-d6f23ce3-f813-4708-9ea6-779c96679668": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0324537s May 21 00:17:04.333: INFO: Pod "downward-api-d6f23ce3-f813-4708-9ea6-779c96679668": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037302693s STEP: Saw pod success May 21 00:17:04.333: INFO: Pod "downward-api-d6f23ce3-f813-4708-9ea6-779c96679668" satisfied condition "Succeeded or Failed" May 21 00:17:04.336: INFO: Trying to get logs from node latest-worker2 pod downward-api-d6f23ce3-f813-4708-9ea6-779c96679668 container dapi-container: STEP: delete the pod May 21 00:17:04.382: INFO: Waiting for pod downward-api-d6f23ce3-f813-4708-9ea6-779c96679668 to disappear May 21 00:17:04.435: INFO: Pod downward-api-d6f23ce3-f813-4708-9ea6-779c96679668 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:17:04.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6434" for this suite. 
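The pod in this test obtains the host IP through the Downward API: an env var whose `valueFrom` is a `fieldRef` to `status.hostIP`. A hypothetical sketch of the relevant container-spec fragment, built here as a plain dict (the container name is taken from the log; the image and command are illustrative assumptions):

```python
def host_ip_env(name="HOST_IP"):
    # Downward API: expose the node's IP to the container as an env var.
    return {
        "name": name,
        "valueFrom": {"fieldRef": {"fieldPath": "status.hostIP"}},
    }

container = {
    "name": "dapi-container",              # name as seen in the log above
    "image": "busybox",                    # hypothetical image for illustration
    "command": ["sh", "-c", "echo $HOST_IP"],
    "env": [host_ip_env()],
}
```

The test asserts the container's output matches the node's actual IP, which is how it verifies the Downward API plumbing end to end.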
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":288,"completed":113,"skipped":2005,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:17:04.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments May 21 00:17:04.531: INFO: Waiting up to 5m0s for pod "client-containers-395fda5a-466e-48c7-b613-6310c5eda9fa" in namespace "containers-555" to be "Succeeded or Failed" May 21 00:17:04.535: INFO: Pod "client-containers-395fda5a-466e-48c7-b613-6310c5eda9fa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019605ms May 21 00:17:06.567: INFO: Pod "client-containers-395fda5a-466e-48c7-b613-6310c5eda9fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036084575s May 21 00:17:08.572: INFO: Pod "client-containers-395fda5a-466e-48c7-b613-6310c5eda9fa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.040544266s STEP: Saw pod success May 21 00:17:08.572: INFO: Pod "client-containers-395fda5a-466e-48c7-b613-6310c5eda9fa" satisfied condition "Succeeded or Failed" May 21 00:17:08.575: INFO: Trying to get logs from node latest-worker pod client-containers-395fda5a-466e-48c7-b613-6310c5eda9fa container test-container: STEP: delete the pod May 21 00:17:08.745: INFO: Waiting for pod client-containers-395fda5a-466e-48c7-b613-6310c5eda9fa to disappear May 21 00:17:08.789: INFO: Pod client-containers-395fda5a-466e-48c7-b613-6310c5eda9fa no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:17:08.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-555" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":288,"completed":114,"skipped":2016,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:17:08.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1523 [It] should create a pod from an image when restart is Never 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 21 00:17:08.859: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7519' May 21 00:17:08.969: INFO: stderr: "" May 21 00:17:08.969: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1528 May 21 00:17:08.979: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7519' May 21 00:17:25.264: INFO: stderr: "" May 21 00:17:25.264: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:17:25.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7519" for this suite. 
• [SLOW TEST:16.478 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1519 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":288,"completed":115,"skipped":2046,"failed":0} SSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:17:25.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-2476fd94-109f-476b-9be6-d4481871226a STEP: Creating secret with name s-test-opt-upd-4c45d8ba-1ee2-4e94-860e-2bcf55fe5bd3 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-2476fd94-109f-476b-9be6-d4481871226a STEP: Updating secret s-test-opt-upd-4c45d8ba-1ee2-4e94-860e-2bcf55fe5bd3 STEP: Creating secret with name s-test-opt-create-d8a071b3-b9c3-4dbd-9fc8-a81c5beabbbf STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:17:35.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7463" for this suite. • [SLOW TEST:10.324 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":116,"skipped":2051,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:17:35.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy May 21 00:17:35.750: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix338726858/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:17:35.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5889" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":288,"completed":117,"skipped":2055,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:17:35.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 21 00:17:40.029: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:17:40.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1557" for this suite. 
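The termination-message case above runs a container as a non-root user that writes `DONE` to a non-default `terminationMessagePath`, then checks the kubelet surfaced that string in the container's terminated state. A hedged sketch of what such a container spec could look like (the path, image, and UID here are assumptions for illustration, not values from the test):

```python
def termination_message_container(path="/dev/termination-custom-log"):
    """Hypothetical container spec mirroring what the e2e case verifies:
    run as a non-root user and write the termination message to a
    non-default terminationMessagePath."""
    return {
        "name": "termination-message-container",
        "image": "busybox",                       # illustrative image
        "command": ["sh", "-c", f"echo -n DONE > {path}"],
        "terminationMessagePath": path,           # non-default path
        "securityContext": {"runAsUser": 1000},   # non-root UID (assumption)
    }

spec = termination_message_container()
```

On termination, the kubelet reads up to a few kilobytes from this path and records it as the container's termination message, which the log above shows matching `DONE`.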
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":288,"completed":118,"skipped":2064,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:17:40.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-7rc5r in namespace proxy-3037 I0521 00:17:40.425333 8 runners.go:190] Created replication controller with name: proxy-service-7rc5r, namespace: proxy-3037, replica count: 1 I0521 00:17:41.475707 8 runners.go:190] proxy-service-7rc5r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0521 00:17:42.475971 8 runners.go:190] proxy-service-7rc5r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0521 00:17:43.476206 8 runners.go:190] proxy-service-7rc5r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0521 00:17:44.476471 8 runners.go:190] proxy-service-7rc5r Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 
unknown, 0 runningButNotReady I0521 00:17:45.476761 8 runners.go:190] proxy-service-7rc5r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0521 00:17:46.476987 8 runners.go:190] proxy-service-7rc5r Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0521 00:17:47.477387 8 runners.go:190] proxy-service-7rc5r Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 21 00:17:47.480: INFO: setup took 7.180340259s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 21 00:17:47.486: INFO: (0) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr/proxy/: test (200; 5.92785ms) May 21 00:17:47.493: INFO: (0) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:1080/proxy/: test<... (200; 12.117127ms) May 21 00:17:47.493: INFO: (0) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:1080/proxy/: ... 
(200; 12.348134ms) May 21 00:17:47.494: INFO: (0) /api/v1/namespaces/proxy-3037/services/proxy-service-7rc5r:portname1/proxy/: foo (200; 13.207011ms) May 21 00:17:47.494: INFO: (0) /api/v1/namespaces/proxy-3037/services/http:proxy-service-7rc5r:portname2/proxy/: bar (200; 13.273501ms) May 21 00:17:47.494: INFO: (0) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:162/proxy/: bar (200; 14.022567ms) May 21 00:17:47.494: INFO: (0) /api/v1/namespaces/proxy-3037/services/proxy-service-7rc5r:portname2/proxy/: bar (200; 13.936421ms) May 21 00:17:47.494: INFO: (0) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:160/proxy/: foo (200; 14.191558ms) May 21 00:17:47.495: INFO: (0) /api/v1/namespaces/proxy-3037/services/http:proxy-service-7rc5r:portname1/proxy/: foo (200; 14.190645ms) May 21 00:17:47.495: INFO: (0) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:160/proxy/: foo (200; 14.348957ms) May 21 00:17:47.498: INFO: (0) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:162/proxy/: bar (200; 17.472705ms) May 21 00:17:47.508: INFO: (0) /api/v1/namespaces/proxy-3037/services/https:proxy-service-7rc5r:tlsportname2/proxy/: tls qux (200; 27.834657ms) May 21 00:17:47.508: INFO: (0) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:443/proxy/: ... 
(200; 5.490234ms) May 21 00:17:47.516: INFO: (1) /api/v1/namespaces/proxy-3037/services/proxy-service-7rc5r:portname1/proxy/: foo (200; 7.945173ms) May 21 00:17:47.516: INFO: (1) /api/v1/namespaces/proxy-3037/services/https:proxy-service-7rc5r:tlsportname1/proxy/: tls baz (200; 8.056685ms) May 21 00:17:47.516: INFO: (1) /api/v1/namespaces/proxy-3037/services/https:proxy-service-7rc5r:tlsportname2/proxy/: tls qux (200; 8.050723ms) May 21 00:17:47.517: INFO: (1) /api/v1/namespaces/proxy-3037/services/proxy-service-7rc5r:portname2/proxy/: bar (200; 8.511198ms) May 21 00:17:47.517: INFO: (1) /api/v1/namespaces/proxy-3037/services/http:proxy-service-7rc5r:portname1/proxy/: foo (200; 8.470958ms) May 21 00:17:47.517: INFO: (1) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:460/proxy/: tls baz (200; 8.480417ms) May 21 00:17:47.517: INFO: (1) /api/v1/namespaces/proxy-3037/services/http:proxy-service-7rc5r:portname2/proxy/: bar (200; 8.546863ms) May 21 00:17:47.517: INFO: (1) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:162/proxy/: bar (200; 8.718188ms) May 21 00:17:47.517: INFO: (1) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:160/proxy/: foo (200; 8.522499ms) May 21 00:17:47.517: INFO: (1) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr/proxy/: test (200; 8.879646ms) May 21 00:17:47.518: INFO: (1) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:1080/proxy/: test<... (200; 9.093353ms) May 21 00:17:47.518: INFO: (1) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:443/proxy/: ... 
(200; 4.848878ms) May 21 00:17:47.523: INFO: (2) /api/v1/namespaces/proxy-3037/services/proxy-service-7rc5r:portname2/proxy/: bar (200; 4.981228ms) May 21 00:17:47.523: INFO: (2) /api/v1/namespaces/proxy-3037/services/https:proxy-service-7rc5r:tlsportname1/proxy/: tls baz (200; 5.208321ms) May 21 00:17:47.523: INFO: (2) /api/v1/namespaces/proxy-3037/services/http:proxy-service-7rc5r:portname2/proxy/: bar (200; 5.565226ms) May 21 00:17:47.523: INFO: (2) /api/v1/namespaces/proxy-3037/services/proxy-service-7rc5r:portname1/proxy/: foo (200; 5.590357ms) May 21 00:17:47.524: INFO: (2) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:443/proxy/: test (200; 6.25259ms) May 21 00:17:47.524: INFO: (2) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:1080/proxy/: test<... (200; 6.373188ms) May 21 00:17:47.524: INFO: (2) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:462/proxy/: tls qux (200; 6.410033ms) May 21 00:17:47.524: INFO: (2) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:460/proxy/: tls baz (200; 6.310237ms) May 21 00:17:47.524: INFO: (2) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:160/proxy/: foo (200; 6.302876ms) May 21 00:17:47.524: INFO: (2) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:162/proxy/: bar (200; 6.446289ms) May 21 00:17:47.524: INFO: (2) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:160/proxy/: foo (200; 6.778061ms) May 21 00:17:47.527: INFO: (3) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:443/proxy/: test<... (200; 3.135743ms) May 21 00:17:47.528: INFO: (3) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:1080/proxy/: ... 
(200; 3.395616ms) May 21 00:17:47.528: INFO: (3) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr/proxy/: test (200; 3.382856ms) May 21 00:17:47.528: INFO: (3) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:160/proxy/: foo (200; 3.440071ms) May 21 00:17:47.529: INFO: (3) /api/v1/namespaces/proxy-3037/services/http:proxy-service-7rc5r:portname2/proxy/: bar (200; 4.695168ms) May 21 00:17:47.530: INFO: (3) /api/v1/namespaces/proxy-3037/services/proxy-service-7rc5r:portname1/proxy/: foo (200; 4.838183ms) May 21 00:17:47.530: INFO: (3) /api/v1/namespaces/proxy-3037/services/http:proxy-service-7rc5r:portname1/proxy/: foo (200; 5.225516ms) May 21 00:17:47.530: INFO: (3) /api/v1/namespaces/proxy-3037/services/proxy-service-7rc5r:portname2/proxy/: bar (200; 4.987557ms) May 21 00:17:47.530: INFO: (3) /api/v1/namespaces/proxy-3037/services/https:proxy-service-7rc5r:tlsportname1/proxy/: tls baz (200; 5.199955ms) May 21 00:17:47.530: INFO: (3) /api/v1/namespaces/proxy-3037/services/https:proxy-service-7rc5r:tlsportname2/proxy/: tls qux (200; 5.073939ms) May 21 00:17:47.536: INFO: (4) /api/v1/namespaces/proxy-3037/services/http:proxy-service-7rc5r:portname1/proxy/: foo (200; 5.849118ms) May 21 00:17:47.536: INFO: (4) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr/proxy/: test (200; 6.115983ms) May 21 00:17:47.536: INFO: (4) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:160/proxy/: foo (200; 6.278651ms) May 21 00:17:47.536: INFO: (4) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:1080/proxy/: ... (200; 6.302493ms) May 21 00:17:47.536: INFO: (4) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:162/proxy/: bar (200; 6.302686ms) May 21 00:17:47.536: INFO: (4) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:443/proxy/: test<... 
(200; 6.300033ms) May 21 00:17:47.536: INFO: (4) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:162/proxy/: bar (200; 6.32459ms) May 21 00:17:47.536: INFO: (4) /api/v1/namespaces/proxy-3037/services/proxy-service-7rc5r:portname1/proxy/: foo (200; 6.379466ms) May 21 00:17:47.536: INFO: (4) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:460/proxy/: tls baz (200; 6.412485ms) May 21 00:17:47.536: INFO: (4) /api/v1/namespaces/proxy-3037/services/proxy-service-7rc5r:portname2/proxy/: bar (200; 6.34273ms) May 21 00:17:47.537: INFO: (4) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:160/proxy/: foo (200; 6.602643ms) May 21 00:17:47.537: INFO: (4) /api/v1/namespaces/proxy-3037/services/http:proxy-service-7rc5r:portname2/proxy/: bar (200; 6.644136ms) May 21 00:17:47.537: INFO: (4) /api/v1/namespaces/proxy-3037/services/https:proxy-service-7rc5r:tlsportname1/proxy/: tls baz (200; 6.701067ms) May 21 00:17:47.537: INFO: (4) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:462/proxy/: tls qux (200; 6.633401ms) May 21 00:17:47.537: INFO: (4) /api/v1/namespaces/proxy-3037/services/https:proxy-service-7rc5r:tlsportname2/proxy/: tls qux (200; 7.00013ms) May 21 00:17:47.541: INFO: (5) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:462/proxy/: tls qux (200; 3.553942ms) May 21 00:17:47.541: INFO: (5) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:160/proxy/: foo (200; 4.036807ms) May 21 00:17:47.541: INFO: (5) /api/v1/namespaces/proxy-3037/services/https:proxy-service-7rc5r:tlsportname2/proxy/: tls qux (200; 4.26847ms) May 21 00:17:47.541: INFO: (5) /api/v1/namespaces/proxy-3037/services/proxy-service-7rc5r:portname1/proxy/: foo (200; 4.468706ms) May 21 00:17:47.541: INFO: (5) /api/v1/namespaces/proxy-3037/services/http:proxy-service-7rc5r:portname2/proxy/: bar (200; 4.541609ms) May 21 00:17:47.541: INFO: (5) 
/api/v1/namespaces/proxy-3037/services/proxy-service-7rc5r:portname2/proxy/: bar (200; 4.535865ms) May 21 00:17:47.542: INFO: (5) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:443/proxy/: ... (200; 4.665847ms) May 21 00:17:47.542: INFO: (5) /api/v1/namespaces/proxy-3037/services/http:proxy-service-7rc5r:portname1/proxy/: foo (200; 5.167247ms) May 21 00:17:47.542: INFO: (5) /api/v1/namespaces/proxy-3037/services/https:proxy-service-7rc5r:tlsportname1/proxy/: tls baz (200; 5.179694ms) May 21 00:17:47.542: INFO: (5) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:460/proxy/: tls baz (200; 5.249749ms) May 21 00:17:47.542: INFO: (5) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr/proxy/: test (200; 5.259791ms) May 21 00:17:47.542: INFO: (5) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:1080/proxy/: test<... (200; 5.489392ms) May 21 00:17:47.542: INFO: (5) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:160/proxy/: foo (200; 5.481722ms) May 21 00:17:47.542: INFO: (5) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:162/proxy/: bar (200; 5.518056ms) May 21 00:17:47.542: INFO: (5) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:162/proxy/: bar (200; 5.534966ms) May 21 00:17:47.545: INFO: (6) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:160/proxy/: foo (200; 2.232198ms) May 21 00:17:47.545: INFO: (6) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:160/proxy/: foo (200; 2.330205ms) May 21 00:17:47.545: INFO: (6) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr/proxy/: test (200; 2.59337ms) May 21 00:17:47.545: INFO: (6) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:162/proxy/: bar (200; 2.526881ms) May 21 00:17:47.547: INFO: (6) /api/v1/namespaces/proxy-3037/services/http:proxy-service-7rc5r:portname2/proxy/: bar (200; 4.486279ms) May 21 00:17:47.547: INFO: (6) 
/api/v1/namespaces/proxy-3037/services/proxy-service-7rc5r:portname1/proxy/: foo (200; 4.551042ms) May 21 00:17:47.547: INFO: (6) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:1080/proxy/: ... (200; 4.63147ms) May 21 00:17:47.547: INFO: (6) /api/v1/namespaces/proxy-3037/services/proxy-service-7rc5r:portname2/proxy/: bar (200; 4.718615ms) May 21 00:17:47.548: INFO: (6) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:462/proxy/: tls qux (200; 4.980341ms) May 21 00:17:47.548: INFO: (6) /api/v1/namespaces/proxy-3037/services/https:proxy-service-7rc5r:tlsportname2/proxy/: tls qux (200; 5.067543ms) May 21 00:17:47.548: INFO: (6) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:443/proxy/: test<... (200; 5.098344ms) May 21 00:17:47.548: INFO: (6) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:460/proxy/: tls baz (200; 5.240363ms) May 21 00:17:47.548: INFO: (6) /api/v1/namespaces/proxy-3037/services/http:proxy-service-7rc5r:portname1/proxy/: foo (200; 5.282246ms) May 21 00:17:47.548: INFO: (6) /api/v1/namespaces/proxy-3037/services/https:proxy-service-7rc5r:tlsportname1/proxy/: tls baz (200; 5.273701ms) May 21 00:17:47.552: INFO: (7) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:1080/proxy/: test<... (200; 3.145975ms) May 21 00:17:47.552: INFO: (7) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:1080/proxy/: ... 
(200; 3.107435ms) May 21 00:17:47.552: INFO: (7) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:443/proxy/: test (200; 5.189844ms) May 21 00:17:47.553: INFO: (7) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:160/proxy/: foo (200; 5.311289ms) May 21 00:17:47.553: INFO: (7) /api/v1/namespaces/proxy-3037/services/proxy-service-7rc5r:portname1/proxy/: foo (200; 4.60158ms) May 21 00:17:47.553: INFO: (7) /api/v1/namespaces/proxy-3037/services/http:proxy-service-7rc5r:portname1/proxy/: foo (200; 4.54236ms) May 21 00:17:47.553: INFO: (7) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:462/proxy/: tls qux (200; 5.449501ms) May 21 00:17:47.554: INFO: (7) /api/v1/namespaces/proxy-3037/services/https:proxy-service-7rc5r:tlsportname1/proxy/: tls baz (200; 4.440165ms) May 21 00:17:47.558: INFO: (8) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:162/proxy/: bar (200; 4.147488ms) May 21 00:17:47.558: INFO: (8) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr/proxy/: test (200; 4.267135ms) May 21 00:17:47.558: INFO: (8) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:462/proxy/: tls qux (200; 4.212657ms) May 21 00:17:47.558: INFO: (8) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:1080/proxy/: ... (200; 4.025703ms) May 21 00:17:47.558: INFO: (8) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:162/proxy/: bar (200; 4.012614ms) May 21 00:17:47.558: INFO: (8) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:460/proxy/: tls baz (200; 3.983871ms) May 21 00:17:47.558: INFO: (8) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:160/proxy/: foo (200; 4.270751ms) May 21 00:17:47.558: INFO: (8) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:160/proxy/: foo (200; 4.372186ms) May 21 00:17:47.558: INFO: (8) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:1080/proxy/: test<... 
(200; 4.393467ms) May 21 00:17:47.558: INFO: (8) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:443/proxy/: ... (200; 4.465702ms) May 21 00:17:47.564: INFO: (9) /api/v1/namespaces/proxy-3037/services/proxy-service-7rc5r:portname2/proxy/: bar (200; 4.468022ms) May 21 00:17:47.564: INFO: (9) /api/v1/namespaces/proxy-3037/services/http:proxy-service-7rc5r:portname2/proxy/: bar (200; 4.479909ms) May 21 00:17:47.564: INFO: (9) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:460/proxy/: tls baz (200; 4.500373ms) May 21 00:17:47.564: INFO: (9) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:443/proxy/: test<... (200; 5.146974ms) May 21 00:17:47.565: INFO: (9) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:160/proxy/: foo (200; 5.084818ms) May 21 00:17:47.565: INFO: (9) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:162/proxy/: bar (200; 5.174616ms) May 21 00:17:47.565: INFO: (9) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr/proxy/: test (200; 5.113573ms) May 21 00:17:47.565: INFO: (9) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:462/proxy/: tls qux (200; 5.220495ms) May 21 00:17:47.568: INFO: (10) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:162/proxy/: bar (200; 3.13149ms) May 21 00:17:47.569: INFO: (10) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:443/proxy/: test (200; 4.720597ms) May 21 00:17:47.570: INFO: (10) /api/v1/namespaces/proxy-3037/services/http:proxy-service-7rc5r:portname2/proxy/: bar (200; 5.160304ms) May 21 00:17:47.571: INFO: (10) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:1080/proxy/: ... 
(200; 5.317485ms) May 21 00:17:47.571: INFO: (10) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:162/proxy/: bar (200; 5.381258ms) May 21 00:17:47.571: INFO: (10) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:460/proxy/: tls baz (200; 5.36515ms) May 21 00:17:47.571: INFO: (10) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:462/proxy/: tls qux (200; 5.547265ms) May 21 00:17:47.571: INFO: (10) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:160/proxy/: foo (200; 5.478151ms) May 21 00:17:47.571: INFO: (10) /api/v1/namespaces/proxy-3037/services/http:proxy-service-7rc5r:portname1/proxy/: foo (200; 5.550282ms) May 21 00:17:47.571: INFO: (10) /api/v1/namespaces/proxy-3037/services/https:proxy-service-7rc5r:tlsportname2/proxy/: tls qux (200; 5.527023ms) May 21 00:17:47.571: INFO: (10) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:1080/proxy/: test<... (200; 5.745852ms) May 21 00:17:47.571: INFO: (10) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:160/proxy/: foo (200; 5.906487ms) May 21 00:17:47.574: INFO: (11) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:162/proxy/: bar (200; 3.03106ms) May 21 00:17:47.574: INFO: (11) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:160/proxy/: foo (200; 3.229967ms) May 21 00:17:47.574: INFO: (11) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:160/proxy/: foo (200; 3.285488ms) May 21 00:17:47.575: INFO: (11) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:1080/proxy/: test<... (200; 3.431446ms) May 21 00:17:47.575: INFO: (11) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:1080/proxy/: ... 
(200; 3.80198ms) May 21 00:17:47.575: INFO: (11) /api/v1/namespaces/proxy-3037/services/http:proxy-service-7rc5r:portname2/proxy/: bar (200; 3.968344ms) May 21 00:17:47.575: INFO: (11) /api/v1/namespaces/proxy-3037/services/proxy-service-7rc5r:portname2/proxy/: bar (200; 4.019635ms) May 21 00:17:47.575: INFO: (11) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:462/proxy/: tls qux (200; 4.090502ms) May 21 00:17:47.575: INFO: (11) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:162/proxy/: bar (200; 4.227004ms) May 21 00:17:47.575: INFO: (11) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:460/proxy/: tls baz (200; 4.37243ms) May 21 00:17:47.576: INFO: (11) /api/v1/namespaces/proxy-3037/services/proxy-service-7rc5r:portname1/proxy/: foo (200; 4.339601ms) May 21 00:17:47.576: INFO: (11) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr/proxy/: test (200; 4.50507ms) May 21 00:17:47.576: INFO: (11) /api/v1/namespaces/proxy-3037/services/http:proxy-service-7rc5r:portname1/proxy/: foo (200; 4.48435ms) May 21 00:17:47.576: INFO: (11) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:443/proxy/: test (200; 3.353306ms) May 21 00:17:47.579: INFO: (12) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:462/proxy/: tls qux (200; 3.372013ms) May 21 00:17:47.579: INFO: (12) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:160/proxy/: foo (200; 3.384125ms) May 21 00:17:47.579: INFO: (12) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:162/proxy/: bar (200; 3.481326ms) May 21 00:17:47.580: INFO: (12) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:443/proxy/: test<... (200; 4.154613ms) May 21 00:17:47.580: INFO: (12) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:1080/proxy/: ... 
(200; 4.26361ms) May 21 00:17:47.581: INFO: (12) /api/v1/namespaces/proxy-3037/services/https:proxy-service-7rc5r:tlsportname2/proxy/: tls qux (200; 5.321867ms) May 21 00:17:47.582: INFO: (12) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:160/proxy/: foo (200; 5.732105ms) May 21 00:17:47.582: INFO: (12) /api/v1/namespaces/proxy-3037/services/proxy-service-7rc5r:portname1/proxy/: foo (200; 5.834982ms) May 21 00:17:47.582: INFO: (12) /api/v1/namespaces/proxy-3037/services/proxy-service-7rc5r:portname2/proxy/: bar (200; 5.958157ms) May 21 00:17:47.582: INFO: (12) /api/v1/namespaces/proxy-3037/services/https:proxy-service-7rc5r:tlsportname1/proxy/: tls baz (200; 6.015813ms) May 21 00:17:47.582: INFO: (12) /api/v1/namespaces/proxy-3037/services/http:proxy-service-7rc5r:portname2/proxy/: bar (200; 6.297878ms) May 21 00:17:47.589: INFO: (12) /api/v1/namespaces/proxy-3037/services/http:proxy-service-7rc5r:portname1/proxy/: foo (200; 13.301492ms) May 21 00:17:47.593: INFO: (13) /api/v1/namespaces/proxy-3037/services/proxy-service-7rc5r:portname2/proxy/: bar (200; 3.564514ms) May 21 00:17:47.593: INFO: (13) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:160/proxy/: foo (200; 3.833507ms) May 21 00:17:47.593: INFO: (13) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:160/proxy/: foo (200; 3.925261ms) May 21 00:17:47.594: INFO: (13) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:162/proxy/: bar (200; 5.029692ms) May 21 00:17:47.594: INFO: (13) /api/v1/namespaces/proxy-3037/services/http:proxy-service-7rc5r:portname1/proxy/: foo (200; 5.066742ms) May 21 00:17:47.594: INFO: (13) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:1080/proxy/: ... 
(200; 5.16661ms) May 21 00:17:47.594: INFO: (13) /api/v1/namespaces/proxy-3037/services/proxy-service-7rc5r:portname1/proxy/: foo (200; 5.100618ms) May 21 00:17:47.594: INFO: (13) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr/proxy/: test (200; 5.184037ms) May 21 00:17:47.594: INFO: (13) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:1080/proxy/: test<... (200; 5.103457ms) May 21 00:17:47.594: INFO: (13) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:162/proxy/: bar (200; 5.105328ms) May 21 00:17:47.594: INFO: (13) /api/v1/namespaces/proxy-3037/services/https:proxy-service-7rc5r:tlsportname2/proxy/: tls qux (200; 5.266796ms) May 21 00:17:47.594: INFO: (13) /api/v1/namespaces/proxy-3037/services/http:proxy-service-7rc5r:portname2/proxy/: bar (200; 5.152197ms) May 21 00:17:47.594: INFO: (13) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:460/proxy/: tls baz (200; 5.195003ms) May 21 00:17:47.594: INFO: (13) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:443/proxy/: test (200; 6.289531ms) May 21 00:17:47.601: INFO: (14) /api/v1/namespaces/proxy-3037/services/proxy-service-7rc5r:portname1/proxy/: foo (200; 6.282157ms) May 21 00:17:47.601: INFO: (14) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:443/proxy/: test<... (200; 10.136633ms) May 21 00:17:47.606: INFO: (14) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:1080/proxy/: ... 
(200; 11.036085ms) May 21 00:17:47.606: INFO: (14) /api/v1/namespaces/proxy-3037/services/proxy-service-7rc5r:portname2/proxy/: bar (200; 11.084459ms) May 21 00:17:47.606: INFO: (14) /api/v1/namespaces/proxy-3037/services/https:proxy-service-7rc5r:tlsportname2/proxy/: tls qux (200; 11.178677ms) May 21 00:17:47.609: INFO: (15) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:462/proxy/: tls qux (200; 3.463384ms) May 21 00:17:47.610: INFO: (15) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:1080/proxy/: test<... (200; 3.884164ms) May 21 00:17:47.611: INFO: (15) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:162/proxy/: bar (200; 4.89792ms) May 21 00:17:47.611: INFO: (15) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:160/proxy/: foo (200; 5.183653ms) May 21 00:17:47.611: INFO: (15) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:460/proxy/: tls baz (200; 5.225462ms) May 21 00:17:47.611: INFO: (15) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:1080/proxy/: ... 
(200; 5.206518ms) May 21 00:17:47.611: INFO: (15) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr/proxy/: test (200; 5.256485ms) May 21 00:17:47.612: INFO: (15) /api/v1/namespaces/proxy-3037/services/http:proxy-service-7rc5r:portname1/proxy/: foo (200; 6.32154ms) May 21 00:17:47.612: INFO: (15) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:160/proxy/: foo (200; 6.522771ms) May 21 00:17:47.612: INFO: (15) /api/v1/namespaces/proxy-3037/services/proxy-service-7rc5r:portname2/proxy/: bar (200; 6.506462ms) May 21 00:17:47.613: INFO: (15) /api/v1/namespaces/proxy-3037/services/https:proxy-service-7rc5r:tlsportname1/proxy/: tls baz (200; 6.518735ms) May 21 00:17:47.613: INFO: (15) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:162/proxy/: bar (200; 6.563157ms) May 21 00:17:47.613: INFO: (15) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:443/proxy/: ... (200; 2.972335ms) May 21 00:17:47.616: INFO: (16) /api/v1/namespaces/proxy-3037/services/https:proxy-service-7rc5r:tlsportname2/proxy/: tls qux (200; 3.075661ms) May 21 00:17:47.616: INFO: (16) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:443/proxy/: test<... 
(200; 3.361764ms) May 21 00:17:47.616: INFO: (16) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:160/proxy/: foo (200; 3.437165ms) May 21 00:17:47.616: INFO: (16) /api/v1/namespaces/proxy-3037/services/proxy-service-7rc5r:portname2/proxy/: bar (200; 3.49185ms) May 21 00:17:47.617: INFO: (16) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr/proxy/: test (200; 3.614184ms) May 21 00:17:47.617: INFO: (16) /api/v1/namespaces/proxy-3037/services/http:proxy-service-7rc5r:portname2/proxy/: bar (200; 3.797773ms) May 21 00:17:47.617: INFO: (16) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:460/proxy/: tls baz (200; 3.798636ms) May 21 00:17:47.617: INFO: (16) /api/v1/namespaces/proxy-3037/services/https:proxy-service-7rc5r:tlsportname1/proxy/: tls baz (200; 3.803943ms) May 21 00:17:47.617: INFO: (16) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:162/proxy/: bar (200; 3.90079ms) May 21 00:17:47.617: INFO: (16) /api/v1/namespaces/proxy-3037/services/http:proxy-service-7rc5r:portname1/proxy/: foo (200; 3.842244ms) May 21 00:17:47.620: INFO: (17) /api/v1/namespaces/proxy-3037/services/http:proxy-service-7rc5r:portname2/proxy/: bar (200; 3.322595ms) May 21 00:17:47.621: INFO: (17) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:1080/proxy/: ... (200; 3.997681ms) May 21 00:17:47.621: INFO: (17) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:160/proxy/: foo (200; 3.968075ms) May 21 00:17:47.621: INFO: (17) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:1080/proxy/: test<... 
(200; 4.07411ms) May 21 00:17:47.621: INFO: (17) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:460/proxy/: tls baz (200; 4.268004ms) May 21 00:17:47.621: INFO: (17) /api/v1/namespaces/proxy-3037/services/proxy-service-7rc5r:portname2/proxy/: bar (200; 4.436508ms) May 21 00:17:47.621: INFO: (17) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:162/proxy/: bar (200; 4.389373ms) May 21 00:17:47.621: INFO: (17) /api/v1/namespaces/proxy-3037/services/http:proxy-service-7rc5r:portname1/proxy/: foo (200; 4.388293ms) May 21 00:17:47.621: INFO: (17) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:443/proxy/: test (200; 4.416101ms) May 21 00:17:47.621: INFO: (17) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:160/proxy/: foo (200; 4.461729ms) May 21 00:17:47.621: INFO: (17) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:162/proxy/: bar (200; 4.444441ms) May 21 00:17:47.621: INFO: (17) /api/v1/namespaces/proxy-3037/services/https:proxy-service-7rc5r:tlsportname1/proxy/: tls baz (200; 4.434677ms) May 21 00:17:47.621: INFO: (17) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:462/proxy/: tls qux (200; 4.522137ms) May 21 00:17:47.621: INFO: (17) /api/v1/namespaces/proxy-3037/services/proxy-service-7rc5r:portname1/proxy/: foo (200; 4.477393ms) May 21 00:17:47.621: INFO: (17) /api/v1/namespaces/proxy-3037/services/https:proxy-service-7rc5r:tlsportname2/proxy/: tls qux (200; 4.49705ms) May 21 00:17:47.624: INFO: (18) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:1080/proxy/: test<... 
(200; 2.411337ms) May 21 00:17:47.624: INFO: (18) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:162/proxy/: bar (200; 2.438778ms) May 21 00:17:47.624: INFO: (18) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:462/proxy/: tls qux (200; 2.890321ms) May 21 00:17:47.625: INFO: (18) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:160/proxy/: foo (200; 3.27446ms) May 21 00:17:47.625: INFO: (18) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr/proxy/: test (200; 3.302856ms) May 21 00:17:47.625: INFO: (18) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:1080/proxy/: ... (200; 3.512167ms) May 21 00:17:47.626: INFO: (18) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:162/proxy/: bar (200; 4.188332ms) May 21 00:17:47.626: INFO: (18) /api/v1/namespaces/proxy-3037/services/http:proxy-service-7rc5r:portname2/proxy/: bar (200; 4.23148ms) May 21 00:17:47.626: INFO: (18) /api/v1/namespaces/proxy-3037/services/https:proxy-service-7rc5r:tlsportname2/proxy/: tls qux (200; 4.48518ms) May 21 00:17:47.626: INFO: (18) /api/v1/namespaces/proxy-3037/services/http:proxy-service-7rc5r:portname1/proxy/: foo (200; 4.569611ms) May 21 00:17:47.626: INFO: (18) /api/v1/namespaces/proxy-3037/services/https:proxy-service-7rc5r:tlsportname1/proxy/: tls baz (200; 4.649379ms) May 21 00:17:47.626: INFO: (18) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:443/proxy/: test<... 
(200; 3.428536ms) May 21 00:17:47.630: INFO: (19) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr/proxy/: test (200; 3.638227ms) May 21 00:17:47.630: INFO: (19) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:162/proxy/: bar (200; 3.769806ms) May 21 00:17:47.630: INFO: (19) /api/v1/namespaces/proxy-3037/pods/proxy-service-7rc5r-xjrwr:160/proxy/: foo (200; 3.837436ms) May 21 00:17:47.630: INFO: (19) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:460/proxy/: tls baz (200; 3.832451ms) May 21 00:17:47.631: INFO: (19) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:1080/proxy/: ... (200; 3.900895ms) May 21 00:17:47.631: INFO: (19) /api/v1/namespaces/proxy-3037/services/proxy-service-7rc5r:portname1/proxy/: foo (200; 4.046357ms) May 21 00:17:47.631: INFO: (19) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:162/proxy/: bar (200; 4.122976ms) May 21 00:17:47.631: INFO: (19) /api/v1/namespaces/proxy-3037/services/http:proxy-service-7rc5r:portname2/proxy/: bar (200; 4.12517ms) May 21 00:17:47.631: INFO: (19) /api/v1/namespaces/proxy-3037/services/https:proxy-service-7rc5r:tlsportname1/proxy/: tls baz (200; 4.188848ms) May 21 00:17:47.631: INFO: (19) /api/v1/namespaces/proxy-3037/pods/http:proxy-service-7rc5r-xjrwr:160/proxy/: foo (200; 4.167019ms) May 21 00:17:47.631: INFO: (19) /api/v1/namespaces/proxy-3037/services/proxy-service-7rc5r:portname2/proxy/: bar (200; 4.481667ms) May 21 00:17:47.631: INFO: (19) /api/v1/namespaces/proxy-3037/pods/https:proxy-service-7rc5r-xjrwr:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 21 
00:17:55.423: INFO: Waiting up to 5m0s for pod "pod-023f28f3-d8b2-4d95-8f0a-b7b008bde4c6" in namespace "emptydir-1965" to be "Succeeded or Failed" May 21 00:17:55.446: INFO: Pod "pod-023f28f3-d8b2-4d95-8f0a-b7b008bde4c6": Phase="Pending", Reason="", readiness=false. Elapsed: 22.976634ms May 21 00:17:57.450: INFO: Pod "pod-023f28f3-d8b2-4d95-8f0a-b7b008bde4c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026474667s May 21 00:17:59.454: INFO: Pod "pod-023f28f3-d8b2-4d95-8f0a-b7b008bde4c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030515875s STEP: Saw pod success May 21 00:17:59.454: INFO: Pod "pod-023f28f3-d8b2-4d95-8f0a-b7b008bde4c6" satisfied condition "Succeeded or Failed" May 21 00:17:59.457: INFO: Trying to get logs from node latest-worker pod pod-023f28f3-d8b2-4d95-8f0a-b7b008bde4c6 container test-container: STEP: delete the pod May 21 00:17:59.491: INFO: Waiting for pod pod-023f28f3-d8b2-4d95-8f0a-b7b008bde4c6 to disappear May 21 00:17:59.507: INFO: Pod pod-023f28f3-d8b2-4d95-8f0a-b7b008bde4c6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:17:59.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1965" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":120,"skipped":2094,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:17:59.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 00:17:59.584: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 21 00:18:00.902: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:18:01.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6866" for this suite. 
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":288,"completed":121,"skipped":2107,"failed":0} SSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:18:01.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 21 00:18:08.965: INFO: Successfully updated pod "pod-update-activedeadlineseconds-1d38fd2d-dada-4149-b617-205a87dfc270" May 21 00:18:08.965: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-1d38fd2d-dada-4149-b617-205a87dfc270" in namespace "pods-7462" to be "terminated due to deadline exceeded" May 21 00:18:08.991: INFO: Pod "pod-update-activedeadlineseconds-1d38fd2d-dada-4149-b617-205a87dfc270": Phase="Running", Reason="", readiness=true. Elapsed: 25.65336ms May 21 00:18:10.996: INFO: Pod "pod-update-activedeadlineseconds-1d38fd2d-dada-4149-b617-205a87dfc270": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 2.030401548s May 21 00:18:10.996: INFO: Pod "pod-update-activedeadlineseconds-1d38fd2d-dada-4149-b617-205a87dfc270" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:18:10.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7462" for this suite. • [SLOW TEST:9.086 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":288,"completed":122,"skipped":2114,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:18:11.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 00:18:11.119: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"a40f73a5-c1e9-4e5e-8391-9d7dde807143", Controller:(*bool)(0xc002fb316a), BlockOwnerDeletion:(*bool)(0xc002fb316b)}} May 21 
00:18:11.217: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"6f307295-f8fd-45c9-94c0-f8bace83cb25", Controller:(*bool)(0xc002fb3362), BlockOwnerDeletion:(*bool)(0xc002fb3363)}} May 21 00:18:11.241: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"c1089c6d-68c0-4407-90a4-f0f83012594c", Controller:(*bool)(0xc0032965b2), BlockOwnerDeletion:(*bool)(0xc0032965b3)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:18:16.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7825" for this suite. • [SLOW TEST:5.321 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":288,"completed":123,"skipped":2129,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:18:16.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment 
when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0521 00:18:17.132936 8 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 21 00:18:17.132: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:18:17.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3917" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":288,"completed":124,"skipped":2131,"failed":0} SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:18:17.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-m8vl STEP: Creating a pod to test atomic-volume-subpath May 21 00:18:17.415: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-m8vl" in namespace "subpath-8453" to be "Succeeded or Failed" May 21 00:18:17.418: INFO: Pod "pod-subpath-test-downwardapi-m8vl": Phase="Pending", Reason="", readiness=false. Elapsed: 3.706951ms May 21 00:18:19.422: INFO: Pod "pod-subpath-test-downwardapi-m8vl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007450991s May 21 00:18:21.425: INFO: Pod "pod-subpath-test-downwardapi-m8vl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010678831s May 21 00:18:23.430: INFO: Pod "pod-subpath-test-downwardapi-m8vl": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.015198816s May 21 00:18:25.434: INFO: Pod "pod-subpath-test-downwardapi-m8vl": Phase="Running", Reason="", readiness=true. Elapsed: 8.019573959s May 21 00:18:27.438: INFO: Pod "pod-subpath-test-downwardapi-m8vl": Phase="Running", Reason="", readiness=true. Elapsed: 10.023707602s May 21 00:18:29.443: INFO: Pod "pod-subpath-test-downwardapi-m8vl": Phase="Running", Reason="", readiness=true. Elapsed: 12.028276223s May 21 00:18:31.447: INFO: Pod "pod-subpath-test-downwardapi-m8vl": Phase="Running", Reason="", readiness=true. Elapsed: 14.032557175s May 21 00:18:33.451: INFO: Pod "pod-subpath-test-downwardapi-m8vl": Phase="Running", Reason="", readiness=true. Elapsed: 16.036569058s May 21 00:18:35.456: INFO: Pod "pod-subpath-test-downwardapi-m8vl": Phase="Running", Reason="", readiness=true. Elapsed: 18.040961606s May 21 00:18:37.459: INFO: Pod "pod-subpath-test-downwardapi-m8vl": Phase="Running", Reason="", readiness=true. Elapsed: 20.044165857s May 21 00:18:39.485: INFO: Pod "pod-subpath-test-downwardapi-m8vl": Phase="Running", Reason="", readiness=true. Elapsed: 22.070334911s May 21 00:18:41.539: INFO: Pod "pod-subpath-test-downwardapi-m8vl": Phase="Running", Reason="", readiness=true. Elapsed: 24.12443504s May 21 00:18:43.543: INFO: Pod "pod-subpath-test-downwardapi-m8vl": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.128507193s STEP: Saw pod success May 21 00:18:43.543: INFO: Pod "pod-subpath-test-downwardapi-m8vl" satisfied condition "Succeeded or Failed" May 21 00:18:43.546: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-m8vl container test-container-subpath-downwardapi-m8vl: STEP: delete the pod May 21 00:18:43.661: INFO: Waiting for pod pod-subpath-test-downwardapi-m8vl to disappear May 21 00:18:43.665: INFO: Pod pod-subpath-test-downwardapi-m8vl no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-m8vl May 21 00:18:43.665: INFO: Deleting pod "pod-subpath-test-downwardapi-m8vl" in namespace "subpath-8453" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:18:43.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8453" for this suite. • [SLOW TEST:26.533 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":288,"completed":125,"skipped":2135,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:18:43.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 21 00:18:44.492: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 21 00:18:46.636: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725617124, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725617124, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725617124, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725617124, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 21 00:18:49.671: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:18:50.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-767" for this suite. STEP: Destroying namespace "webhook-767-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.669 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":288,"completed":126,"skipped":2149,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:18:50.344: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 21 00:18:51.874: INFO: Pod name wrapped-volume-race-230d9473-c639-4404-84f1-b5c558daf121: Found 0 pods out of 5 May 21 00:18:56.882: INFO: Pod name wrapped-volume-race-230d9473-c639-4404-84f1-b5c558daf121: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-230d9473-c639-4404-84f1-b5c558daf121 in namespace emptydir-wrapper-9335, will wait for the garbage collector to delete the pods May 21 00:19:09.335: INFO: Deleting ReplicationController wrapped-volume-race-230d9473-c639-4404-84f1-b5c558daf121 took: 9.42907ms May 21 00:19:09.635: INFO: Terminating ReplicationController wrapped-volume-race-230d9473-c639-4404-84f1-b5c558daf121 pods took: 300.200919ms STEP: Creating RC which spawns configmap-volume pods May 21 00:19:25.071: INFO: Pod name wrapped-volume-race-985e0f23-b6bd-4621-b090-d2abcee2a335: Found 0 pods out of 5 May 21 00:19:30.080: INFO: Pod name wrapped-volume-race-985e0f23-b6bd-4621-b090-d2abcee2a335: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-985e0f23-b6bd-4621-b090-d2abcee2a335 in namespace emptydir-wrapper-9335, will wait for the garbage collector to delete the pods May 21 00:19:42.184: INFO: Deleting ReplicationController wrapped-volume-race-985e0f23-b6bd-4621-b090-d2abcee2a335 took: 11.923865ms May 21 00:19:42.584: INFO: Terminating ReplicationController wrapped-volume-race-985e0f23-b6bd-4621-b090-d2abcee2a335 pods took: 400.22852ms STEP: Creating RC which spawns configmap-volume pods May 21 00:19:55.436: INFO: Pod name 
wrapped-volume-race-d4d1830e-1d32-45b8-b450-c7f3c5172fde: Found 0 pods out of 5 May 21 00:20:00.446: INFO: Pod name wrapped-volume-race-d4d1830e-1d32-45b8-b450-c7f3c5172fde: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-d4d1830e-1d32-45b8-b450-c7f3c5172fde in namespace emptydir-wrapper-9335, will wait for the garbage collector to delete the pods May 21 00:20:14.537: INFO: Deleting ReplicationController wrapped-volume-race-d4d1830e-1d32-45b8-b450-c7f3c5172fde took: 14.166994ms May 21 00:20:14.937: INFO: Terminating ReplicationController wrapped-volume-race-d4d1830e-1d32-45b8-b450-c7f3c5172fde pods took: 400.244222ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:20:25.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9335" for this suite. 
• [SLOW TEST:95.510 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":288,"completed":127,"skipped":2154,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:20:25.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-298c5afe-d7d2-440c-8247-b8823895d864 STEP: Creating a pod to test consume configMaps May 21 00:20:25.958: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bb1926e8-2379-4cf7-bdbf-e46511c0913a" in namespace "projected-8972" to be "Succeeded or Failed" May 21 00:20:25.969: INFO: Pod "pod-projected-configmaps-bb1926e8-2379-4cf7-bdbf-e46511c0913a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.222573ms May 21 00:20:27.976: INFO: Pod "pod-projected-configmaps-bb1926e8-2379-4cf7-bdbf-e46511c0913a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018259438s May 21 00:20:29.980: INFO: Pod "pod-projected-configmaps-bb1926e8-2379-4cf7-bdbf-e46511c0913a": Phase="Running", Reason="", readiness=true. Elapsed: 4.022091911s May 21 00:20:32.000: INFO: Pod "pod-projected-configmaps-bb1926e8-2379-4cf7-bdbf-e46511c0913a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.042143442s STEP: Saw pod success May 21 00:20:32.000: INFO: Pod "pod-projected-configmaps-bb1926e8-2379-4cf7-bdbf-e46511c0913a" satisfied condition "Succeeded or Failed" May 21 00:20:32.022: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-bb1926e8-2379-4cf7-bdbf-e46511c0913a container projected-configmap-volume-test: STEP: delete the pod May 21 00:20:32.127: INFO: Waiting for pod pod-projected-configmaps-bb1926e8-2379-4cf7-bdbf-e46511c0913a to disappear May 21 00:20:32.129: INFO: Pod pod-projected-configmaps-bb1926e8-2379-4cf7-bdbf-e46511c0913a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:20:32.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8972" for this suite. 
• [SLOW TEST:6.331 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":128,"skipped":2167,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:20:32.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 21 00:20:32.276: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. 
May 21 00:20:32.688: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 21 00:20:35.211: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725617232, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725617232, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725617232, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725617232, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 21 00:20:37.213: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725617232, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725617232, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725617232, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725617232, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 21 00:20:39.845: INFO: Waited 626.268886ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:20:40.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-885" for this suite. • [SLOW TEST:8.445 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":288,"completed":129,"skipped":2235,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:20:40.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-9358 STEP: creating a selector STEP: Creating the service pods in kubernetes May 21 00:20:40.789: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 21 00:20:41.079: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 21 00:20:43.169: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 21 00:20:45.089: INFO: The status of Pod netserver-0 is Running (Ready = false) May 21 00:20:47.083: INFO: The status of Pod netserver-0 is Running (Ready = false) May 21 00:20:49.084: INFO: The status of Pod netserver-0 is Running (Ready = false) May 21 00:20:51.083: INFO: The status of Pod netserver-0 is Running (Ready = false) May 21 00:20:53.083: INFO: The status of Pod netserver-0 is Running (Ready = false) May 21 00:20:55.085: INFO: The status of Pod netserver-0 is Running (Ready = false) May 21 00:20:57.084: INFO: The status of Pod netserver-0 is Running (Ready = false) May 21 00:20:59.085: INFO: The status of Pod netserver-0 is Running (Ready = true) May 21 00:20:59.091: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 21 00:21:03.178: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.156:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9358 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 21 00:21:03.178: INFO: >>> kubeConfig: /root/.kube/config I0521 00:21:03.213076 8 log.go:172] (0xc001d12840) (0xc0025bdea0) Create stream I0521 00:21:03.213277 8 log.go:172] (0xc001d12840) (0xc0025bdea0) Stream added, broadcasting: 1 I0521 00:21:03.219353 8 log.go:172] (0xc001d12840) Reply frame received for 1 I0521 
00:21:03.219399 8 log.go:172] (0xc001d12840) (0xc002b29a40) Create stream I0521 00:21:03.219415 8 log.go:172] (0xc001d12840) (0xc002b29a40) Stream added, broadcasting: 3 I0521 00:21:03.220622 8 log.go:172] (0xc001d12840) Reply frame received for 3 I0521 00:21:03.220683 8 log.go:172] (0xc001d12840) (0xc0024f0be0) Create stream I0521 00:21:03.220716 8 log.go:172] (0xc001d12840) (0xc0024f0be0) Stream added, broadcasting: 5 I0521 00:21:03.222175 8 log.go:172] (0xc001d12840) Reply frame received for 5 I0521 00:21:03.308628 8 log.go:172] (0xc001d12840) Data frame received for 3 I0521 00:21:03.308670 8 log.go:172] (0xc002b29a40) (3) Data frame handling I0521 00:21:03.308725 8 log.go:172] (0xc002b29a40) (3) Data frame sent I0521 00:21:03.308793 8 log.go:172] (0xc001d12840) Data frame received for 3 I0521 00:21:03.308862 8 log.go:172] (0xc002b29a40) (3) Data frame handling I0521 00:21:03.309377 8 log.go:172] (0xc001d12840) Data frame received for 5 I0521 00:21:03.309404 8 log.go:172] (0xc0024f0be0) (5) Data frame handling I0521 00:21:03.311117 8 log.go:172] (0xc001d12840) Data frame received for 1 I0521 00:21:03.311138 8 log.go:172] (0xc0025bdea0) (1) Data frame handling I0521 00:21:03.311152 8 log.go:172] (0xc0025bdea0) (1) Data frame sent I0521 00:21:03.311169 8 log.go:172] (0xc001d12840) (0xc0025bdea0) Stream removed, broadcasting: 1 I0521 00:21:03.311341 8 log.go:172] (0xc001d12840) (0xc0025bdea0) Stream removed, broadcasting: 1 I0521 00:21:03.311381 8 log.go:172] (0xc001d12840) (0xc002b29a40) Stream removed, broadcasting: 3 I0521 00:21:03.311429 8 log.go:172] (0xc001d12840) (0xc0024f0be0) Stream removed, broadcasting: 5 May 21 00:21:03.311: INFO: Found all expected endpoints: [netserver-0] I0521 00:21:03.311801 8 log.go:172] (0xc001d12840) Go away received May 21 00:21:03.315: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.163:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9358 
PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 21 00:21:03.315: INFO: >>> kubeConfig: /root/.kube/config I0521 00:21:03.345389 8 log.go:172] (0xc002371760) (0xc0024f1360) Create stream I0521 00:21:03.345424 8 log.go:172] (0xc002371760) (0xc0024f1360) Stream added, broadcasting: 1 I0521 00:21:03.347660 8 log.go:172] (0xc002371760) Reply frame received for 1 I0521 00:21:03.347693 8 log.go:172] (0xc002371760) (0xc001036280) Create stream I0521 00:21:03.347706 8 log.go:172] (0xc002371760) (0xc001036280) Stream added, broadcasting: 3 I0521 00:21:03.348541 8 log.go:172] (0xc002371760) Reply frame received for 3 I0521 00:21:03.348591 8 log.go:172] (0xc002371760) (0xc001036320) Create stream I0521 00:21:03.348618 8 log.go:172] (0xc002371760) (0xc001036320) Stream added, broadcasting: 5 I0521 00:21:03.349695 8 log.go:172] (0xc002371760) Reply frame received for 5 I0521 00:21:03.407450 8 log.go:172] (0xc002371760) Data frame received for 3 I0521 00:21:03.407475 8 log.go:172] (0xc001036280) (3) Data frame handling I0521 00:21:03.407494 8 log.go:172] (0xc001036280) (3) Data frame sent I0521 00:21:03.407504 8 log.go:172] (0xc002371760) Data frame received for 3 I0521 00:21:03.407512 8 log.go:172] (0xc001036280) (3) Data frame handling I0521 00:21:03.407689 8 log.go:172] (0xc002371760) Data frame received for 5 I0521 00:21:03.407712 8 log.go:172] (0xc001036320) (5) Data frame handling I0521 00:21:03.409371 8 log.go:172] (0xc002371760) Data frame received for 1 I0521 00:21:03.409388 8 log.go:172] (0xc0024f1360) (1) Data frame handling I0521 00:21:03.409400 8 log.go:172] (0xc0024f1360) (1) Data frame sent I0521 00:21:03.409624 8 log.go:172] (0xc002371760) (0xc0024f1360) Stream removed, broadcasting: 1 I0521 00:21:03.409710 8 log.go:172] (0xc002371760) (0xc0024f1360) Stream removed, broadcasting: 1 I0521 00:21:03.409726 8 log.go:172] (0xc002371760) (0xc001036280) Stream removed, broadcasting: 3 I0521 
00:21:03.409857 8 log.go:172] (0xc002371760) Go away received I0521 00:21:03.409894 8 log.go:172] (0xc002371760) (0xc001036320) Stream removed, broadcasting: 5 May 21 00:21:03.409: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:21:03.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9358" for this suite. • [SLOW TEST:22.783 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":130,"skipped":2274,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:21:03.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 21 00:21:04.265: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 21 00:21:06.275: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725617264, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725617264, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725617264, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725617264, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 21 00:21:09.305: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 00:21:09.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3636-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:21:10.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8448" for this suite. STEP: Destroying namespace "webhook-8448-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.358 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":288,"completed":131,"skipped":2283,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:21:10.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition May 21 00:21:10.887: INFO: Waiting 
up to 5m0s for pod "var-expansion-ddd57b6e-a1ee-492b-ac90-39d32dafd78d" in namespace "var-expansion-4428" to be "Succeeded or Failed" May 21 00:21:10.898: INFO: Pod "var-expansion-ddd57b6e-a1ee-492b-ac90-39d32dafd78d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.435687ms May 21 00:21:12.902: INFO: Pod "var-expansion-ddd57b6e-a1ee-492b-ac90-39d32dafd78d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01451415s May 21 00:21:14.906: INFO: Pod "var-expansion-ddd57b6e-a1ee-492b-ac90-39d32dafd78d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018534077s STEP: Saw pod success May 21 00:21:14.906: INFO: Pod "var-expansion-ddd57b6e-a1ee-492b-ac90-39d32dafd78d" satisfied condition "Succeeded or Failed" May 21 00:21:14.909: INFO: Trying to get logs from node latest-worker pod var-expansion-ddd57b6e-a1ee-492b-ac90-39d32dafd78d container dapi-container: STEP: delete the pod May 21 00:21:14.935: INFO: Waiting for pod var-expansion-ddd57b6e-a1ee-492b-ac90-39d32dafd78d to disappear May 21 00:21:14.939: INFO: Pod var-expansion-ddd57b6e-a1ee-492b-ac90-39d32dafd78d no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:21:14.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4428" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":288,"completed":132,"skipped":2297,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:21:14.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 21 00:21:16.064: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 21 00:21:18.076: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725617276, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725617276, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63725617276, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725617276, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 21 00:21:21.110: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:21:21.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3705" for this suite. STEP: Destroying namespace "webhook-3705-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.483 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":288,"completed":133,"skipped":2301,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:21:21.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:21:21.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "events-8605" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":288,"completed":134,"skipped":2306,"failed":0} SSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:21:21.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-8310 STEP: creating service affinity-clusterip in namespace services-8310 STEP: creating replication controller affinity-clusterip in namespace services-8310 I0521 00:21:22.055187 8 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-8310, replica count: 3 I0521 00:21:25.105555 8 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0521 00:21:28.105777 8 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 21 00:21:28.112: INFO: Creating new exec pod May 21 00:21:33.127: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8310 execpod-affinityq4kxv -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' May 21 00:21:33.380: INFO: stderr: "I0521 00:21:33.273503 2788 log.go:172] (0xc000944dc0) (0xc000be4460) Create stream\nI0521 00:21:33.273548 2788 log.go:172] (0xc000944dc0) (0xc000be4460) Stream added, broadcasting: 1\nI0521 00:21:33.278774 2788 log.go:172] (0xc000944dc0) Reply frame received for 1\nI0521 00:21:33.278814 2788 log.go:172] (0xc000944dc0) (0xc00069c500) Create stream\nI0521 00:21:33.278825 2788 log.go:172] (0xc000944dc0) (0xc00069c500) Stream added, broadcasting: 3\nI0521 00:21:33.279602 2788 log.go:172] (0xc000944dc0) Reply frame received for 3\nI0521 00:21:33.279637 2788 log.go:172] (0xc000944dc0) (0xc0005401e0) Create stream\nI0521 00:21:33.279655 2788 log.go:172] (0xc000944dc0) (0xc0005401e0) Stream added, broadcasting: 5\nI0521 00:21:33.280579 2788 log.go:172] (0xc000944dc0) Reply frame received for 5\nI0521 00:21:33.373747 2788 log.go:172] (0xc000944dc0) Data frame received for 5\nI0521 00:21:33.373785 2788 log.go:172] (0xc0005401e0) (5) Data frame handling\nI0521 00:21:33.373798 2788 log.go:172] (0xc0005401e0) (5) Data frame sent\nI0521 00:21:33.373805 2788 log.go:172] (0xc000944dc0) Data frame received for 5\nI0521 00:21:33.373812 2788 log.go:172] (0xc0005401e0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0521 00:21:33.373843 2788 log.go:172] (0xc000944dc0) Data frame received for 3\nI0521 00:21:33.373858 2788 log.go:172] (0xc00069c500) (3) Data frame handling\nI0521 00:21:33.374922 2788 log.go:172] (0xc000944dc0) Data frame received for 1\nI0521 00:21:33.374939 2788 log.go:172] (0xc000be4460) (1) Data frame handling\nI0521 00:21:33.374950 2788 log.go:172] (0xc000be4460) (1) Data frame sent\nI0521 00:21:33.374961 2788 log.go:172] (0xc000944dc0) (0xc000be4460) Stream removed, 
broadcasting: 1\nI0521 00:21:33.375023 2788 log.go:172] (0xc000944dc0) Go away received\nI0521 00:21:33.375201 2788 log.go:172] (0xc000944dc0) (0xc000be4460) Stream removed, broadcasting: 1\nI0521 00:21:33.375212 2788 log.go:172] (0xc000944dc0) (0xc00069c500) Stream removed, broadcasting: 3\nI0521 00:21:33.375218 2788 log.go:172] (0xc000944dc0) (0xc0005401e0) Stream removed, broadcasting: 5\n" May 21 00:21:33.380: INFO: stdout: "" May 21 00:21:33.380: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8310 execpod-affinityq4kxv -- /bin/sh -x -c nc -zv -t -w 2 10.100.179.158 80' May 21 00:21:33.586: INFO: stderr: "I0521 00:21:33.519778 2808 log.go:172] (0xc000ae3e40) (0xc000ba8aa0) Create stream\nI0521 00:21:33.519830 2808 log.go:172] (0xc000ae3e40) (0xc000ba8aa0) Stream added, broadcasting: 1\nI0521 00:21:33.522359 2808 log.go:172] (0xc000ae3e40) Reply frame received for 1\nI0521 00:21:33.522404 2808 log.go:172] (0xc000ae3e40) (0xc000aec3c0) Create stream\nI0521 00:21:33.522422 2808 log.go:172] (0xc000ae3e40) (0xc000aec3c0) Stream added, broadcasting: 3\nI0521 00:21:33.523409 2808 log.go:172] (0xc000ae3e40) Reply frame received for 3\nI0521 00:21:33.523435 2808 log.go:172] (0xc000ae3e40) (0xc000a84820) Create stream\nI0521 00:21:33.523443 2808 log.go:172] (0xc000ae3e40) (0xc000a84820) Stream added, broadcasting: 5\nI0521 00:21:33.524423 2808 log.go:172] (0xc000ae3e40) Reply frame received for 5\nI0521 00:21:33.580586 2808 log.go:172] (0xc000ae3e40) Data frame received for 3\nI0521 00:21:33.580627 2808 log.go:172] (0xc000aec3c0) (3) Data frame handling\nI0521 00:21:33.580657 2808 log.go:172] (0xc000ae3e40) Data frame received for 5\nI0521 00:21:33.580670 2808 log.go:172] (0xc000a84820) (5) Data frame handling\nI0521 00:21:33.580685 2808 log.go:172] (0xc000a84820) (5) Data frame sent\nI0521 00:21:33.580707 2808 log.go:172] (0xc000ae3e40) Data frame received for 5\nI0521 00:21:33.580725 
2808 log.go:172] (0xc000a84820) (5) Data frame handling\n+ nc -zv -t -w 2 10.100.179.158 80\nConnection to 10.100.179.158 80 port [tcp/http] succeeded!\nI0521 00:21:33.582273 2808 log.go:172] (0xc000ae3e40) Data frame received for 1\nI0521 00:21:33.582305 2808 log.go:172] (0xc000ba8aa0) (1) Data frame handling\nI0521 00:21:33.582329 2808 log.go:172] (0xc000ba8aa0) (1) Data frame sent\nI0521 00:21:33.582343 2808 log.go:172] (0xc000ae3e40) (0xc000ba8aa0) Stream removed, broadcasting: 1\nI0521 00:21:33.582412 2808 log.go:172] (0xc000ae3e40) Go away received\nI0521 00:21:33.582742 2808 log.go:172] (0xc000ae3e40) (0xc000ba8aa0) Stream removed, broadcasting: 1\nI0521 00:21:33.582760 2808 log.go:172] (0xc000ae3e40) (0xc000aec3c0) Stream removed, broadcasting: 3\nI0521 00:21:33.582768 2808 log.go:172] (0xc000ae3e40) (0xc000a84820) Stream removed, broadcasting: 5\n" May 21 00:21:33.586: INFO: stdout: "" May 21 00:21:33.587: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-8310 execpod-affinityq4kxv -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.100.179.158:80/ ; done' May 21 00:21:33.877: INFO: stderr: "I0521 00:21:33.703659 2828 log.go:172] (0xc000b16b00) (0xc0003d00a0) Create stream\nI0521 00:21:33.703701 2828 log.go:172] (0xc000b16b00) (0xc0003d00a0) Stream added, broadcasting: 1\nI0521 00:21:33.705898 2828 log.go:172] (0xc000b16b00) Reply frame received for 1\nI0521 00:21:33.705934 2828 log.go:172] (0xc000b16b00) (0xc0002386e0) Create stream\nI0521 00:21:33.705947 2828 log.go:172] (0xc000b16b00) (0xc0002386e0) Stream added, broadcasting: 3\nI0521 00:21:33.706629 2828 log.go:172] (0xc000b16b00) Reply frame received for 3\nI0521 00:21:33.706664 2828 log.go:172] (0xc000b16b00) (0xc0003483c0) Create stream\nI0521 00:21:33.706673 2828 log.go:172] (0xc000b16b00) (0xc0003483c0) Stream added, broadcasting: 5\nI0521 00:21:33.707377 2828 log.go:172] 
(0xc000b16b00) Reply frame received for 5\nI0521 00:21:33.754118 2828 log.go:172] (0xc000b16b00) Data frame received for 3\nI0521 00:21:33.754138 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\nI0521 00:21:33.754179 2828 log.go:172] (0xc000b16b00) Data frame received for 5\nI0521 00:21:33.754217 2828 log.go:172] (0xc0003483c0) (5) Data frame handling\nI0521 00:21:33.754228 2828 log.go:172] (0xc0003483c0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.179.158:80/\nI0521 00:21:33.754243 2828 log.go:172] (0xc0002386e0) (3) Data frame sent\nI0521 00:21:33.787660 2828 log.go:172] (0xc000b16b00) Data frame received for 3\nI0521 00:21:33.787691 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\nI0521 00:21:33.787717 2828 log.go:172] (0xc0002386e0) (3) Data frame sent\nI0521 00:21:33.788533 2828 log.go:172] (0xc000b16b00) Data frame received for 5\nI0521 00:21:33.788558 2828 log.go:172] (0xc0003483c0) (5) Data frame handling\nI0521 00:21:33.788580 2828 log.go:172] (0xc0003483c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.179.158:80/\nI0521 00:21:33.788601 2828 log.go:172] (0xc000b16b00) Data frame received for 3\nI0521 00:21:33.788624 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\nI0521 00:21:33.788637 2828 log.go:172] (0xc0002386e0) (3) Data frame sent\nI0521 00:21:33.796632 2828 log.go:172] (0xc000b16b00) Data frame received for 3\nI0521 00:21:33.796669 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\nI0521 00:21:33.796708 2828 log.go:172] (0xc0002386e0) (3) Data frame sent\nI0521 00:21:33.797689 2828 log.go:172] (0xc000b16b00) Data frame received for 5\nI0521 00:21:33.797729 2828 log.go:172] (0xc0003483c0) (5) Data frame handling\nI0521 00:21:33.797753 2828 log.go:172] (0xc0003483c0) (5) Data frame sent\nI0521 00:21:33.797779 2828 log.go:172] (0xc000b16b00) Data frame received for 5\nI0521 00:21:33.797798 2828 log.go:172] (0xc0003483c0) (5) Data frame 
handling\nI0521 00:21:33.797822 2828 log.go:172] (0xc000b16b00) Data frame received for 3\nI0521 00:21:33.797842 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\n+ echo\nI0521 00:21:33.797874 2828 log.go:172] (0xc0002386e0) (3) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.100.179.158:80/\nI0521 00:21:33.797900 2828 log.go:172] (0xc0003483c0) (5) Data frame sent\nI0521 00:21:33.801422 2828 log.go:172] (0xc000b16b00) Data frame received for 3\nI0521 00:21:33.801442 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\nI0521 00:21:33.801459 2828 log.go:172] (0xc0002386e0) (3) Data frame sent\nI0521 00:21:33.802348 2828 log.go:172] (0xc000b16b00) Data frame received for 5\nI0521 00:21:33.802373 2828 log.go:172] (0xc0003483c0) (5) Data frame handling\nI0521 00:21:33.802385 2828 log.go:172] (0xc0003483c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.179.158:80/\nI0521 00:21:33.802398 2828 log.go:172] (0xc000b16b00) Data frame received for 3\nI0521 00:21:33.802405 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\nI0521 00:21:33.802413 2828 log.go:172] (0xc0002386e0) (3) Data frame sent\nI0521 00:21:33.805936 2828 log.go:172] (0xc000b16b00) Data frame received for 3\nI0521 00:21:33.805956 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\nI0521 00:21:33.805974 2828 log.go:172] (0xc0002386e0) (3) Data frame sent\nI0521 00:21:33.806235 2828 log.go:172] (0xc000b16b00) Data frame received for 3\nI0521 00:21:33.806249 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\nI0521 00:21:33.806255 2828 log.go:172] (0xc0002386e0) (3) Data frame sent\nI0521 00:21:33.806275 2828 log.go:172] (0xc000b16b00) Data frame received for 5\nI0521 00:21:33.806293 2828 log.go:172] (0xc0003483c0) (5) Data frame handling\nI0521 00:21:33.806311 2828 log.go:172] (0xc0003483c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.179.158:80/\nI0521 00:21:33.812710 2828 log.go:172] (0xc000b16b00) Data frame 
received for 3\nI0521 00:21:33.812729 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\nI0521 00:21:33.812751 2828 log.go:172] (0xc0002386e0) (3) Data frame sent\nI0521 00:21:33.813288 2828 log.go:172] (0xc000b16b00) Data frame received for 3\nI0521 00:21:33.813321 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\nI0521 00:21:33.813344 2828 log.go:172] (0xc000b16b00) Data frame received for 5\nI0521 00:21:33.813363 2828 log.go:172] (0xc0003483c0) (5) Data frame handling\nI0521 00:21:33.813383 2828 log.go:172] (0xc0003483c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.179.158:80/\nI0521 00:21:33.813398 2828 log.go:172] (0xc0002386e0) (3) Data frame sent\nI0521 00:21:33.820009 2828 log.go:172] (0xc000b16b00) Data frame received for 3\nI0521 00:21:33.820035 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\nI0521 00:21:33.820051 2828 log.go:172] (0xc0002386e0) (3) Data frame sent\nI0521 00:21:33.820987 2828 log.go:172] (0xc000b16b00) Data frame received for 5\nI0521 00:21:33.821008 2828 log.go:172] (0xc0003483c0) (5) Data frame handling\nI0521 00:21:33.821016 2828 log.go:172] (0xc0003483c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.179.158:80/\nI0521 00:21:33.821034 2828 log.go:172] (0xc000b16b00) Data frame received for 3\nI0521 00:21:33.821062 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\nI0521 00:21:33.821084 2828 log.go:172] (0xc0002386e0) (3) Data frame sent\nI0521 00:21:33.825981 2828 log.go:172] (0xc000b16b00) Data frame received for 5\nI0521 00:21:33.826008 2828 log.go:172] (0xc0003483c0) (5) Data frame handling\nI0521 00:21:33.826025 2828 log.go:172] (0xc0003483c0) (5) Data frame sent\n+ echo\n+ curl -q -sI0521 00:21:33.826050 2828 log.go:172] (0xc000b16b00) Data frame received for 3\nI0521 00:21:33.826062 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\nI0521 00:21:33.826088 2828 log.go:172] (0xc000b16b00) Data frame received for 5\nI0521 00:21:33.826119 
2828 log.go:172] (0xc0003483c0) (5) Data frame handling\nI0521 00:21:33.826143 2828 log.go:172] (0xc0003483c0) (5) Data frame sent\n --connect-timeout 2 http://10.100.179.158:80/\nI0521 00:21:33.826180 2828 log.go:172] (0xc0002386e0) (3) Data frame sent\nI0521 00:21:33.826222 2828 log.go:172] (0xc000b16b00) Data frame received for 3\nI0521 00:21:33.826241 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\nI0521 00:21:33.826256 2828 log.go:172] (0xc0002386e0) (3) Data frame sent\nI0521 00:21:33.830266 2828 log.go:172] (0xc000b16b00) Data frame received for 3\nI0521 00:21:33.830284 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\nI0521 00:21:33.830300 2828 log.go:172] (0xc0002386e0) (3) Data frame sent\nI0521 00:21:33.831261 2828 log.go:172] (0xc000b16b00) Data frame received for 5\nI0521 00:21:33.831283 2828 log.go:172] (0xc0003483c0) (5) Data frame handling\nI0521 00:21:33.831307 2828 log.go:172] (0xc0003483c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.179.158:80/\nI0521 00:21:33.831411 2828 log.go:172] (0xc000b16b00) Data frame received for 3\nI0521 00:21:33.831429 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\nI0521 00:21:33.831446 2828 log.go:172] (0xc0002386e0) (3) Data frame sent\nI0521 00:21:33.837402 2828 log.go:172] (0xc000b16b00) Data frame received for 3\nI0521 00:21:33.837427 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\nI0521 00:21:33.837452 2828 log.go:172] (0xc0002386e0) (3) Data frame sent\nI0521 00:21:33.838430 2828 log.go:172] (0xc000b16b00) Data frame received for 3\nI0521 00:21:33.838446 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\nI0521 00:21:33.838455 2828 log.go:172] (0xc0002386e0) (3) Data frame sent\nI0521 00:21:33.838467 2828 log.go:172] (0xc000b16b00) Data frame received for 5\nI0521 00:21:33.838474 2828 log.go:172] (0xc0003483c0) (5) Data frame handling\nI0521 00:21:33.838481 2828 log.go:172] (0xc0003483c0) (5) Data frame sent\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://10.100.179.158:80/\nI0521 00:21:33.842081 2828 log.go:172] (0xc000b16b00) Data frame received for 3\nI0521 00:21:33.842106 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\nI0521 00:21:33.842125 2828 log.go:172] (0xc0002386e0) (3) Data frame sent\nI0521 00:21:33.843046 2828 log.go:172] (0xc000b16b00) Data frame received for 5\nI0521 00:21:33.843065 2828 log.go:172] (0xc0003483c0) (5) Data frame handling\nI0521 00:21:33.843079 2828 log.go:172] (0xc0003483c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.179.158:80/\nI0521 00:21:33.843094 2828 log.go:172] (0xc000b16b00) Data frame received for 3\nI0521 00:21:33.843103 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\nI0521 00:21:33.843130 2828 log.go:172] (0xc0002386e0) (3) Data frame sent\nI0521 00:21:33.847465 2828 log.go:172] (0xc000b16b00) Data frame received for 3\nI0521 00:21:33.847482 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\nI0521 00:21:33.847500 2828 log.go:172] (0xc0002386e0) (3) Data frame sent\nI0521 00:21:33.848206 2828 log.go:172] (0xc000b16b00) Data frame received for 3\nI0521 00:21:33.848226 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\nI0521 00:21:33.848236 2828 log.go:172] (0xc0002386e0) (3) Data frame sent\nI0521 00:21:33.848256 2828 log.go:172] (0xc000b16b00) Data frame received for 5\nI0521 00:21:33.848276 2828 log.go:172] (0xc0003483c0) (5) Data frame handling\nI0521 00:21:33.848300 2828 log.go:172] (0xc0003483c0) (5) Data frame sent\nI0521 00:21:33.848312 2828 log.go:172] (0xc000b16b00) Data frame received for 5\nI0521 00:21:33.848323 2828 log.go:172] (0xc0003483c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.179.158:80/\nI0521 00:21:33.848340 2828 log.go:172] (0xc0003483c0) (5) Data frame sent\nI0521 00:21:33.852365 2828 log.go:172] (0xc000b16b00) Data frame received for 3\nI0521 00:21:33.852388 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\nI0521 
00:21:33.852411 2828 log.go:172] (0xc0002386e0) (3) Data frame sent\nI0521 00:21:33.852823 2828 log.go:172] (0xc000b16b00) Data frame received for 5\nI0521 00:21:33.852844 2828 log.go:172] (0xc0003483c0) (5) Data frame handling\nI0521 00:21:33.852854 2828 log.go:172] (0xc0003483c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.179.158:80/\nI0521 00:21:33.852868 2828 log.go:172] (0xc000b16b00) Data frame received for 3\nI0521 00:21:33.852876 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\nI0521 00:21:33.852885 2828 log.go:172] (0xc0002386e0) (3) Data frame sent\nI0521 00:21:33.856606 2828 log.go:172] (0xc000b16b00) Data frame received for 3\nI0521 00:21:33.856628 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\nI0521 00:21:33.856643 2828 log.go:172] (0xc0002386e0) (3) Data frame sent\nI0521 00:21:33.856995 2828 log.go:172] (0xc000b16b00) Data frame received for 5\nI0521 00:21:33.857017 2828 log.go:172] (0xc000b16b00) Data frame received for 3\nI0521 00:21:33.857051 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\nI0521 00:21:33.857065 2828 log.go:172] (0xc0002386e0) (3) Data frame sent\nI0521 00:21:33.857080 2828 log.go:172] (0xc0003483c0) (5) Data frame handling\nI0521 00:21:33.857092 2828 log.go:172] (0xc0003483c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.179.158:80/\nI0521 00:21:33.860975 2828 log.go:172] (0xc000b16b00) Data frame received for 3\nI0521 00:21:33.860988 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\nI0521 00:21:33.860999 2828 log.go:172] (0xc0002386e0) (3) Data frame sent\nI0521 00:21:33.861854 2828 log.go:172] (0xc000b16b00) Data frame received for 3\nI0521 00:21:33.861885 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\nI0521 00:21:33.861901 2828 log.go:172] (0xc0002386e0) (3) Data frame sent\nI0521 00:21:33.861916 2828 log.go:172] (0xc000b16b00) Data frame received for 5\nI0521 00:21:33.861925 2828 log.go:172] (0xc0003483c0) (5) Data frame 
handling\nI0521 00:21:33.861933 2828 log.go:172] (0xc0003483c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.179.158:80/\nI0521 00:21:33.866036 2828 log.go:172] (0xc000b16b00) Data frame received for 3\nI0521 00:21:33.866071 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\nI0521 00:21:33.866111 2828 log.go:172] (0xc0002386e0) (3) Data frame sent\nI0521 00:21:33.866366 2828 log.go:172] (0xc000b16b00) Data frame received for 3\nI0521 00:21:33.866382 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\nI0521 00:21:33.866398 2828 log.go:172] (0xc000b16b00) Data frame received for 5\nI0521 00:21:33.866418 2828 log.go:172] (0xc0003483c0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.100.179.158:80/\nI0521 00:21:33.866444 2828 log.go:172] (0xc0002386e0) (3) Data frame sent\nI0521 00:21:33.866478 2828 log.go:172] (0xc0003483c0) (5) Data frame sent\nI0521 00:21:33.870025 2828 log.go:172] (0xc000b16b00) Data frame received for 3\nI0521 00:21:33.870042 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\nI0521 00:21:33.870061 2828 log.go:172] (0xc0002386e0) (3) Data frame sent\nI0521 00:21:33.870627 2828 log.go:172] (0xc000b16b00) Data frame received for 3\nI0521 00:21:33.870659 2828 log.go:172] (0xc0002386e0) (3) Data frame handling\nI0521 00:21:33.871048 2828 log.go:172] (0xc000b16b00) Data frame received for 5\nI0521 00:21:33.871060 2828 log.go:172] (0xc0003483c0) (5) Data frame handling\nI0521 00:21:33.872690 2828 log.go:172] (0xc000b16b00) Data frame received for 1\nI0521 00:21:33.872719 2828 log.go:172] (0xc0003d00a0) (1) Data frame handling\nI0521 00:21:33.872734 2828 log.go:172] (0xc0003d00a0) (1) Data frame sent\nI0521 00:21:33.872748 2828 log.go:172] (0xc000b16b00) (0xc0003d00a0) Stream removed, broadcasting: 1\nI0521 00:21:33.872792 2828 log.go:172] (0xc000b16b00) Go away received\nI0521 00:21:33.873474 2828 log.go:172] (0xc000b16b00) (0xc0003d00a0) Stream removed, broadcasting: 1\nI0521 
00:21:33.873498 2828 log.go:172] (0xc000b16b00) (0xc0002386e0) Stream removed, broadcasting: 3\nI0521 00:21:33.873509 2828 log.go:172] (0xc000b16b00) (0xc0003483c0) Stream removed, broadcasting: 5\n" May 21 00:21:33.878: INFO: stdout: "\naffinity-clusterip-hlvfl\naffinity-clusterip-hlvfl\naffinity-clusterip-hlvfl\naffinity-clusterip-hlvfl\naffinity-clusterip-hlvfl\naffinity-clusterip-hlvfl\naffinity-clusterip-hlvfl\naffinity-clusterip-hlvfl\naffinity-clusterip-hlvfl\naffinity-clusterip-hlvfl\naffinity-clusterip-hlvfl\naffinity-clusterip-hlvfl\naffinity-clusterip-hlvfl\naffinity-clusterip-hlvfl\naffinity-clusterip-hlvfl\naffinity-clusterip-hlvfl" May 21 00:21:33.878: INFO: Received response from host: May 21 00:21:33.878: INFO: Received response from host: affinity-clusterip-hlvfl May 21 00:21:33.878: INFO: Received response from host: affinity-clusterip-hlvfl May 21 00:21:33.878: INFO: Received response from host: affinity-clusterip-hlvfl May 21 00:21:33.878: INFO: Received response from host: affinity-clusterip-hlvfl May 21 00:21:33.878: INFO: Received response from host: affinity-clusterip-hlvfl May 21 00:21:33.878: INFO: Received response from host: affinity-clusterip-hlvfl May 21 00:21:33.878: INFO: Received response from host: affinity-clusterip-hlvfl May 21 00:21:33.878: INFO: Received response from host: affinity-clusterip-hlvfl May 21 00:21:33.878: INFO: Received response from host: affinity-clusterip-hlvfl May 21 00:21:33.878: INFO: Received response from host: affinity-clusterip-hlvfl May 21 00:21:33.878: INFO: Received response from host: affinity-clusterip-hlvfl May 21 00:21:33.878: INFO: Received response from host: affinity-clusterip-hlvfl May 21 00:21:33.878: INFO: Received response from host: affinity-clusterip-hlvfl May 21 00:21:33.878: INFO: Received response from host: affinity-clusterip-hlvfl May 21 00:21:33.878: INFO: Received response from host: affinity-clusterip-hlvfl May 21 00:21:33.878: INFO: Received response from host: 
affinity-clusterip-hlvfl May 21 00:21:33.878: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-8310, will wait for the garbage collector to delete the pods May 21 00:21:33.958: INFO: Deleting ReplicationController affinity-clusterip took: 6.696935ms May 21 00:21:34.358: INFO: Terminating ReplicationController affinity-clusterip pods took: 400.252983ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:21:45.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8310" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:23.476 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":135,"skipped":2309,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:21:45.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:21:49.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3520" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":136,"skipped":2323,"failed":0} SSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:21:49.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 00:21:49.581: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:21:53.716: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8554" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":288,"completed":137,"skipped":2327,"failed":0} S ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:21:53.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-92149d5b-784d-49a9-ab04-b5c425e5df69 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:21:59.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9057" for this suite. 
• [SLOW TEST:6.184 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":138,"skipped":2328,"failed":0} SSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:21:59.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-9819 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-9819 STEP: creating replication controller externalsvc in namespace services-9819 I0521 00:22:00.143179 8 runners.go:190] Created replication controller with name: externalsvc, namespace: services-9819, replica count: 2 I0521 00:22:03.193593 8 runners.go:190] externalsvc Pods: 2 out of 2 created, 
0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0521 00:22:06.193814 8 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 21 00:22:06.265: INFO: Creating new exec pod May 21 00:22:10.302: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9819 execpod7pjdd -- /bin/sh -x -c nslookup clusterip-service' May 21 00:22:13.361: INFO: stderr: "I0521 00:22:13.259607 2848 log.go:172] (0xc00003a420) (0xc000814a00) Create stream\nI0521 00:22:13.259650 2848 log.go:172] (0xc00003a420) (0xc000814a00) Stream added, broadcasting: 1\nI0521 00:22:13.263028 2848 log.go:172] (0xc00003a420) Reply frame received for 1\nI0521 00:22:13.263083 2848 log.go:172] (0xc00003a420) (0xc00080cc80) Create stream\nI0521 00:22:13.263098 2848 log.go:172] (0xc00003a420) (0xc00080cc80) Stream added, broadcasting: 3\nI0521 00:22:13.264020 2848 log.go:172] (0xc00003a420) Reply frame received for 3\nI0521 00:22:13.264056 2848 log.go:172] (0xc00003a420) (0xc00080dc20) Create stream\nI0521 00:22:13.264068 2848 log.go:172] (0xc00003a420) (0xc00080dc20) Stream added, broadcasting: 5\nI0521 00:22:13.264995 2848 log.go:172] (0xc00003a420) Reply frame received for 5\nI0521 00:22:13.344724 2848 log.go:172] (0xc00003a420) Data frame received for 5\nI0521 00:22:13.344748 2848 log.go:172] (0xc00080dc20) (5) Data frame handling\nI0521 00:22:13.344765 2848 log.go:172] (0xc00080dc20) (5) Data frame sent\n+ nslookup clusterip-service\nI0521 00:22:13.352542 2848 log.go:172] (0xc00003a420) Data frame received for 3\nI0521 00:22:13.352564 2848 log.go:172] (0xc00080cc80) (3) Data frame handling\nI0521 00:22:13.352583 2848 log.go:172] (0xc00080cc80) (3) Data frame sent\nI0521 00:22:13.353482 2848 log.go:172] (0xc00003a420) Data frame received for 
3\nI0521 00:22:13.353495 2848 log.go:172] (0xc00080cc80) (3) Data frame handling\nI0521 00:22:13.353507 2848 log.go:172] (0xc00080cc80) (3) Data frame sent\nI0521 00:22:13.353961 2848 log.go:172] (0xc00003a420) Data frame received for 5\nI0521 00:22:13.353981 2848 log.go:172] (0xc00080dc20) (5) Data frame handling\nI0521 00:22:13.354250 2848 log.go:172] (0xc00003a420) Data frame received for 3\nI0521 00:22:13.354264 2848 log.go:172] (0xc00080cc80) (3) Data frame handling\nI0521 00:22:13.355888 2848 log.go:172] (0xc00003a420) Data frame received for 1\nI0521 00:22:13.355905 2848 log.go:172] (0xc000814a00) (1) Data frame handling\nI0521 00:22:13.355914 2848 log.go:172] (0xc000814a00) (1) Data frame sent\nI0521 00:22:13.355922 2848 log.go:172] (0xc00003a420) (0xc000814a00) Stream removed, broadcasting: 1\nI0521 00:22:13.356105 2848 log.go:172] (0xc00003a420) Go away received\nI0521 00:22:13.356261 2848 log.go:172] (0xc00003a420) (0xc000814a00) Stream removed, broadcasting: 1\nI0521 00:22:13.356278 2848 log.go:172] (0xc00003a420) (0xc00080cc80) Stream removed, broadcasting: 3\nI0521 00:22:13.356288 2848 log.go:172] (0xc00003a420) (0xc00080dc20) Stream removed, broadcasting: 5\n" May 21 00:22:13.361: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-9819.svc.cluster.local\tcanonical name = externalsvc.services-9819.svc.cluster.local.\nName:\texternalsvc.services-9819.svc.cluster.local\nAddress: 10.105.194.52\n\n" STEP: deleting ReplicationController externalsvc in namespace services-9819, will wait for the garbage collector to delete the pods May 21 00:22:13.421: INFO: Deleting ReplicationController externalsvc took: 6.57018ms May 21 00:22:13.721: INFO: Terminating ReplicationController externalsvc pods took: 300.251463ms May 21 00:22:25.524: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:22:25.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9819" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:25.645 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":288,"completed":139,"skipped":2333,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:22:25.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath May 21 00:22:25.658: INFO: Waiting up to 5m0s for pod "var-expansion-32db640a-6214-4066-9629-72ab7ab6f572" in namespace "var-expansion-5933" to be "Succeeded or Failed" May 21 00:22:25.662: INFO: Pod 
"var-expansion-32db640a-6214-4066-9629-72ab7ab6f572": Phase="Pending", Reason="", readiness=false. Elapsed: 3.936792ms May 21 00:22:27.739: INFO: Pod "var-expansion-32db640a-6214-4066-9629-72ab7ab6f572": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080480809s May 21 00:22:29.742: INFO: Pod "var-expansion-32db640a-6214-4066-9629-72ab7ab6f572": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.083272422s STEP: Saw pod success May 21 00:22:29.742: INFO: Pod "var-expansion-32db640a-6214-4066-9629-72ab7ab6f572" satisfied condition "Succeeded or Failed" May 21 00:22:29.744: INFO: Trying to get logs from node latest-worker2 pod var-expansion-32db640a-6214-4066-9629-72ab7ab6f572 container dapi-container: STEP: delete the pod May 21 00:22:30.005: INFO: Waiting for pod var-expansion-32db640a-6214-4066-9629-72ab7ab6f572 to disappear May 21 00:22:30.011: INFO: Pod var-expansion-32db640a-6214-4066-9629-72ab7ab6f572 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:22:30.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5933" for this suite. 
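Editor's note: the Variable Expansion test above substitutes a variable into a volume subpath via the `subPathExpr` field on a volume mount, expanded from an environment variable populated through the downward API. A sketch under assumed names (only `dapi-container` comes from the log; the image, paths, and the `POD_NAME` variable are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-sketch       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container           # container name as seen in the log
    image: busybox
    command: ["sh", "-c", "test -d /test_volume && echo ok"]
    env:
    - name: POD_NAME               # assumed variable; the test's actual env var is not shown
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: workdir
      mountPath: /test_volume
      subPathExpr: $(POD_NAME)     # expanded to the pod name when the volume is mounted
  volumes:
  - name: workdir
    emptyDir: {}
```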
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":288,"completed":140,"skipped":2345,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:22:30.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod var-expansion-cd442d95-862c-475c-a7b6-1bc3de5a6fa0 STEP: updating the pod May 21 00:22:38.675: INFO: Successfully updated pod "var-expansion-cd442d95-862c-475c-a7b6-1bc3de5a6fa0" STEP: waiting for pod and container restart STEP: Failing liveness probe May 21 00:22:38.692: INFO: ExecWithOptions {Command:[/bin/sh -c rm /volume_mount/foo/test.log] Namespace:var-expansion-1002 PodName:var-expansion-cd442d95-862c-475c-a7b6-1bc3de5a6fa0 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 21 00:22:38.692: INFO: >>> kubeConfig: /root/.kube/config I0521 00:22:38.736769 8 log.go:172] (0xc001d12f20) (0xc00212d9a0) Create stream I0521 00:22:38.736808 8 log.go:172] (0xc001d12f20) (0xc00212d9a0) Stream added, broadcasting: 1 I0521 00:22:38.738918 8 log.go:172] (0xc001d12f20) Reply frame 
received for 1 I0521 00:22:38.738986 8 log.go:172] (0xc001d12f20) (0xc00130c3c0) Create stream I0521 00:22:38.739004 8 log.go:172] (0xc001d12f20) (0xc00130c3c0) Stream added, broadcasting: 3 I0521 00:22:38.739959 8 log.go:172] (0xc001d12f20) Reply frame received for 3 I0521 00:22:38.740001 8 log.go:172] (0xc001d12f20) (0xc00212dcc0) Create stream I0521 00:22:38.740020 8 log.go:172] (0xc001d12f20) (0xc00212dcc0) Stream added, broadcasting: 5 I0521 00:22:38.741034 8 log.go:172] (0xc001d12f20) Reply frame received for 5 I0521 00:22:38.830644 8 log.go:172] (0xc001d12f20) Data frame received for 5 I0521 00:22:38.830674 8 log.go:172] (0xc00212dcc0) (5) Data frame handling I0521 00:22:38.830702 8 log.go:172] (0xc001d12f20) Data frame received for 3 I0521 00:22:38.830711 8 log.go:172] (0xc00130c3c0) (3) Data frame handling I0521 00:22:38.832120 8 log.go:172] (0xc001d12f20) Data frame received for 1 I0521 00:22:38.832136 8 log.go:172] (0xc00212d9a0) (1) Data frame handling I0521 00:22:38.832149 8 log.go:172] (0xc00212d9a0) (1) Data frame sent I0521 00:22:38.832166 8 log.go:172] (0xc001d12f20) (0xc00212d9a0) Stream removed, broadcasting: 1 I0521 00:22:38.832261 8 log.go:172] (0xc001d12f20) (0xc00212d9a0) Stream removed, broadcasting: 1 I0521 00:22:38.832279 8 log.go:172] (0xc001d12f20) (0xc00130c3c0) Stream removed, broadcasting: 3 I0521 00:22:38.832369 8 log.go:172] (0xc001d12f20) Go away received I0521 00:22:38.832438 8 log.go:172] (0xc001d12f20) (0xc00212dcc0) Stream removed, broadcasting: 5 May 21 00:22:38.832: INFO: Pod exec output: / STEP: Waiting for container to restart May 21 00:22:38.836: INFO: Container dapi-container, restarts: 0 May 21 00:22:48.841: INFO: Container dapi-container, restarts: 0 May 21 00:22:58.841: INFO: Container dapi-container, restarts: 0 May 21 00:23:08.840: INFO: Container dapi-container, restarts: 0 May 21 00:23:18.840: INFO: Container dapi-container, restarts: 1 May 21 00:23:18.840: INFO: Container has restart count: 1 STEP: Rewriting the 
file May 21 00:23:18.840: INFO: ExecWithOptions {Command:[/bin/sh -c echo test-after > /volume_mount/foo/test.log] Namespace:var-expansion-1002 PodName:var-expansion-cd442d95-862c-475c-a7b6-1bc3de5a6fa0 ContainerName:side-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 21 00:23:18.840: INFO: >>> kubeConfig: /root/.kube/config I0521 00:23:18.875566 8 log.go:172] (0xc002caa160) (0xc0011b28c0) Create stream I0521 00:23:18.875613 8 log.go:172] (0xc002caa160) (0xc0011b28c0) Stream added, broadcasting: 1 I0521 00:23:18.877555 8 log.go:172] (0xc002caa160) Reply frame received for 1 I0521 00:23:18.877612 8 log.go:172] (0xc002caa160) (0xc0010c6000) Create stream I0521 00:23:18.877624 8 log.go:172] (0xc002caa160) (0xc0010c6000) Stream added, broadcasting: 3 I0521 00:23:18.878404 8 log.go:172] (0xc002caa160) Reply frame received for 3 I0521 00:23:18.878432 8 log.go:172] (0xc002caa160) (0xc0010c60a0) Create stream I0521 00:23:18.878441 8 log.go:172] (0xc002caa160) (0xc0010c60a0) Stream added, broadcasting: 5 I0521 00:23:18.879151 8 log.go:172] (0xc002caa160) Reply frame received for 5 I0521 00:23:18.969855 8 log.go:172] (0xc002caa160) Data frame received for 5 I0521 00:23:18.969882 8 log.go:172] (0xc0010c60a0) (5) Data frame handling I0521 00:23:18.970198 8 log.go:172] (0xc002caa160) Data frame received for 3 I0521 00:23:18.970223 8 log.go:172] (0xc0010c6000) (3) Data frame handling I0521 00:23:18.971333 8 log.go:172] (0xc002caa160) Data frame received for 1 I0521 00:23:18.971352 8 log.go:172] (0xc0011b28c0) (1) Data frame handling I0521 00:23:18.971368 8 log.go:172] (0xc0011b28c0) (1) Data frame sent I0521 00:23:18.971724 8 log.go:172] (0xc002caa160) (0xc0011b28c0) Stream removed, broadcasting: 1 I0521 00:23:18.971772 8 log.go:172] (0xc002caa160) Go away received I0521 00:23:18.971861 8 log.go:172] (0xc002caa160) (0xc0011b28c0) Stream removed, broadcasting: 1 I0521 00:23:18.971885 8 log.go:172] (0xc002caa160) (0xc0010c6000) Stream removed, 
broadcasting: 3 I0521 00:23:18.971904 8 log.go:172] (0xc002caa160) (0xc0010c60a0) Stream removed, broadcasting: 5 May 21 00:23:18.971: INFO: Exec stderr: "" May 21 00:23:18.971: INFO: Pod exec output: STEP: Waiting for container to stop restarting May 21 00:23:46.980: INFO: Container has restart count: 2 May 21 00:24:48.980: INFO: Container restart has stabilized STEP: test for subpath mounted with old value May 21 00:24:48.983: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /volume_mount/foo/test.log] Namespace:var-expansion-1002 PodName:var-expansion-cd442d95-862c-475c-a7b6-1bc3de5a6fa0 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 21 00:24:48.983: INFO: >>> kubeConfig: /root/.kube/config I0521 00:24:49.004952 8 log.go:172] (0xc0025b0160) (0xc0015f1cc0) Create stream I0521 00:24:49.004974 8 log.go:172] (0xc0025b0160) (0xc0015f1cc0) Stream added, broadcasting: 1 I0521 00:24:49.006502 8 log.go:172] (0xc0025b0160) Reply frame received for 1 I0521 00:24:49.006536 8 log.go:172] (0xc0025b0160) (0xc0015f1d60) Create stream I0521 00:24:49.006553 8 log.go:172] (0xc0025b0160) (0xc0015f1d60) Stream added, broadcasting: 3 I0521 00:24:49.007905 8 log.go:172] (0xc0025b0160) Reply frame received for 3 I0521 00:24:49.007943 8 log.go:172] (0xc0025b0160) (0xc0010c6820) Create stream I0521 00:24:49.007957 8 log.go:172] (0xc0025b0160) (0xc0010c6820) Stream added, broadcasting: 5 I0521 00:24:49.008751 8 log.go:172] (0xc0025b0160) Reply frame received for 5 I0521 00:24:49.105936 8 log.go:172] (0xc0025b0160) Data frame received for 5 I0521 00:24:49.105974 8 log.go:172] (0xc0010c6820) (5) Data frame handling I0521 00:24:49.106026 8 log.go:172] (0xc0025b0160) Data frame received for 3 I0521 00:24:49.106056 8 log.go:172] (0xc0015f1d60) (3) Data frame handling I0521 00:24:49.107195 8 log.go:172] (0xc0025b0160) Data frame received for 1 I0521 00:24:49.107232 8 log.go:172] (0xc0015f1cc0) (1) Data frame handling I0521 
00:24:49.107272 8 log.go:172] (0xc0015f1cc0) (1) Data frame sent I0521 00:24:49.107296 8 log.go:172] (0xc0025b0160) (0xc0015f1cc0) Stream removed, broadcasting: 1 I0521 00:24:49.107337 8 log.go:172] (0xc0025b0160) Go away received I0521 00:24:49.107520 8 log.go:172] (0xc0025b0160) (0xc0015f1cc0) Stream removed, broadcasting: 1 I0521 00:24:49.107559 8 log.go:172] (0xc0025b0160) (0xc0015f1d60) Stream removed, broadcasting: 3 I0521 00:24:49.107584 8 log.go:172] (0xc0025b0160) (0xc0010c6820) Stream removed, broadcasting: 5 May 21 00:24:49.111: INFO: ExecWithOptions {Command:[/bin/sh -c test ! -f /volume_mount/newsubpath/test.log] Namespace:var-expansion-1002 PodName:var-expansion-cd442d95-862c-475c-a7b6-1bc3de5a6fa0 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 21 00:24:49.111: INFO: >>> kubeConfig: /root/.kube/config I0521 00:24:49.140743 8 log.go:172] (0xc002c8cdc0) (0xc0025bdd60) Create stream I0521 00:24:49.140799 8 log.go:172] (0xc002c8cdc0) (0xc0025bdd60) Stream added, broadcasting: 1 I0521 00:24:49.143204 8 log.go:172] (0xc002c8cdc0) Reply frame received for 1 I0521 00:24:49.143237 8 log.go:172] (0xc002c8cdc0) (0xc0010c6960) Create stream I0521 00:24:49.143246 8 log.go:172] (0xc002c8cdc0) (0xc0010c6960) Stream added, broadcasting: 3 I0521 00:24:49.144120 8 log.go:172] (0xc002c8cdc0) Reply frame received for 3 I0521 00:24:49.144153 8 log.go:172] (0xc002c8cdc0) (0xc0015f1e00) Create stream I0521 00:24:49.144162 8 log.go:172] (0xc002c8cdc0) (0xc0015f1e00) Stream added, broadcasting: 5 I0521 00:24:49.144904 8 log.go:172] (0xc002c8cdc0) Reply frame received for 5 I0521 00:24:49.208310 8 log.go:172] (0xc002c8cdc0) Data frame received for 5 I0521 00:24:49.208346 8 log.go:172] (0xc0015f1e00) (5) Data frame handling I0521 00:24:49.208431 8 log.go:172] (0xc002c8cdc0) Data frame received for 3 I0521 00:24:49.208494 8 log.go:172] (0xc0010c6960) (3) Data frame handling I0521 00:24:49.210379 8 log.go:172] 
(0xc002c8cdc0) Data frame received for 1 I0521 00:24:49.210412 8 log.go:172] (0xc0025bdd60) (1) Data frame handling I0521 00:24:49.210436 8 log.go:172] (0xc0025bdd60) (1) Data frame sent I0521 00:24:49.210468 8 log.go:172] (0xc002c8cdc0) (0xc0025bdd60) Stream removed, broadcasting: 1 I0521 00:24:49.210528 8 log.go:172] (0xc002c8cdc0) Go away received I0521 00:24:49.210639 8 log.go:172] (0xc002c8cdc0) (0xc0025bdd60) Stream removed, broadcasting: 1 I0521 00:24:49.210697 8 log.go:172] (0xc002c8cdc0) (0xc0010c6960) Stream removed, broadcasting: 3 I0521 00:24:49.210740 8 log.go:172] (0xc002c8cdc0) (0xc0015f1e00) Stream removed, broadcasting: 5 May 21 00:24:49.210: INFO: Deleting pod "var-expansion-cd442d95-862c-475c-a7b6-1bc3de5a6fa0" in namespace "var-expansion-1002" May 21 00:24:49.220: INFO: Wait up to 5m0s for pod "var-expansion-cd442d95-862c-475c-a7b6-1bc3de5a6fa0" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:25:25.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1002" for this suite. 
• [SLOW TEST:175.215 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":288,"completed":141,"skipped":2412,"failed":0} SSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:25:25.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token May 21 00:25:25.877: INFO: created pod pod-service-account-defaultsa May 21 00:25:25.877: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 21 00:25:25.901: INFO: created pod pod-service-account-mountsa May 21 00:25:25.901: INFO: pod pod-service-account-mountsa service account token volume mount: true May 21 00:25:25.917: INFO: created pod pod-service-account-nomountsa May 21 00:25:25.917: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 21 00:25:25.952: INFO: created pod 
pod-service-account-defaultsa-mountspec May 21 00:25:25.952: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 21 00:25:25.998: INFO: created pod pod-service-account-mountsa-mountspec May 21 00:25:25.998: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 21 00:25:26.063: INFO: created pod pod-service-account-nomountsa-mountspec May 21 00:25:26.063: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 21 00:25:26.078: INFO: created pod pod-service-account-defaultsa-nomountspec May 21 00:25:26.078: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 21 00:25:26.113: INFO: created pod pod-service-account-mountsa-nomountspec May 21 00:25:26.113: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 21 00:25:26.145: INFO: created pod pod-service-account-nomountsa-nomountspec May 21 00:25:26.145: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:25:26.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4375" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":288,"completed":142,"skipped":2420,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:25:26.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 
'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:26:07.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6734" for this suite. • [SLOW TEST:41.664 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":288,"completed":143,"skipped":2427,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:26:07.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-6142 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-6142 STEP: Deleting pre-stop pod May 21 00:26:21.064: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:26:21.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-6142" for this suite. 
• [SLOW TEST:13.208 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":288,"completed":144,"skipped":2436,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:26:21.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 21 00:26:21.499: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 21 00:26:21.509: INFO: Waiting for terminating namespaces to be deleted... 
May 21 00:26:21.511: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 21 00:26:21.515: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 21 00:26:21.515: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 21 00:26:21.515: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 21 00:26:21.515: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 21 00:26:21.515: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 21 00:26:21.515: INFO: Container kindnet-cni ready: true, restart count 0 May 21 00:26:21.515: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 21 00:26:21.515: INFO: Container kube-proxy ready: true, restart count 0 May 21 00:26:21.515: INFO: tester from prestop-6142 started at 2020-05-21 00:26:12 +0000 UTC (1 container statuses recorded) May 21 00:26:21.515: INFO: Container tester ready: true, restart count 0 May 21 00:26:21.515: INFO: pod-service-account-mountsa-mountspec from svcaccounts-4375 started at 2020-05-21 00:25:26 +0000 UTC (1 container statuses recorded) May 21 00:26:21.515: INFO: Container token-test ready: true, restart count 0 May 21 00:26:21.515: INFO: pod-service-account-nomountsa-mountspec from svcaccounts-4375 started at 2020-05-21 00:25:26 +0000 UTC (1 container statuses recorded) May 21 00:26:21.515: INFO: Container token-test ready: true, restart count 0 May 21 00:26:21.515: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 21 00:26:21.520: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 21 00:26:21.520: INFO: Container 
rally-c184502e-ept97j69 ready: false, restart count 0 May 21 00:26:21.520: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 21 00:26:21.520: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 21 00:26:21.520: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 21 00:26:21.520: INFO: Container kindnet-cni ready: true, restart count 0 May 21 00:26:21.520: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 21 00:26:21.520: INFO: Container kube-proxy ready: true, restart count 0 May 21 00:26:21.520: INFO: server from prestop-6142 started at 2020-05-21 00:26:08 +0000 UTC (1 container statuses recorded) May 21 00:26:21.520: INFO: Container server ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-11ba4c10-b5ee-43ea-9bfa-81d8174fccf0 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-11ba4c10-b5ee-43ea-9bfa-81d8174fccf0 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-11ba4c10-b5ee-43ea-9bfa-81d8174fccf0 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:26:37.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1564" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.688 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":288,"completed":145,"skipped":2473,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:26:37.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 21 00:26:37.909: INFO: Waiting up to 5m0s for pod "downwardapi-volume-12154346-5411-46dd-9eb3-b01ae443d596" in namespace "downward-api-8743" to be "Succeeded or Failed" May 21 00:26:37.922: INFO: Pod "downwardapi-volume-12154346-5411-46dd-9eb3-b01ae443d596": Phase="Pending", Reason="", readiness=false. Elapsed: 12.931735ms May 21 00:26:39.926: INFO: Pod "downwardapi-volume-12154346-5411-46dd-9eb3-b01ae443d596": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017100174s May 21 00:26:41.931: INFO: Pod "downwardapi-volume-12154346-5411-46dd-9eb3-b01ae443d596": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021856867s STEP: Saw pod success May 21 00:26:41.931: INFO: Pod "downwardapi-volume-12154346-5411-46dd-9eb3-b01ae443d596" satisfied condition "Succeeded or Failed" May 21 00:26:41.934: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-12154346-5411-46dd-9eb3-b01ae443d596 container client-container: STEP: delete the pod May 21 00:26:42.004: INFO: Waiting for pod downwardapi-volume-12154346-5411-46dd-9eb3-b01ae443d596 to disappear May 21 00:26:42.051: INFO: Pod downwardapi-volume-12154346-5411-46dd-9eb3-b01ae443d596 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:26:42.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8743" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":146,"skipped":2491,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:26:42.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a 
pod to test downward API volume plugin May 21 00:26:42.254: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e33a33e9-3442-4775-9b36-e4b13f6a53fd" in namespace "projected-4659" to be "Succeeded or Failed" May 21 00:26:42.276: INFO: Pod "downwardapi-volume-e33a33e9-3442-4775-9b36-e4b13f6a53fd": Phase="Pending", Reason="", readiness=false. Elapsed: 21.567894ms May 21 00:26:44.279: INFO: Pod "downwardapi-volume-e33a33e9-3442-4775-9b36-e4b13f6a53fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02438066s May 21 00:26:46.282: INFO: Pod "downwardapi-volume-e33a33e9-3442-4775-9b36-e4b13f6a53fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027305124s May 21 00:26:48.284: INFO: Pod "downwardapi-volume-e33a33e9-3442-4775-9b36-e4b13f6a53fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029484489s STEP: Saw pod success May 21 00:26:48.284: INFO: Pod "downwardapi-volume-e33a33e9-3442-4775-9b36-e4b13f6a53fd" satisfied condition "Succeeded or Failed" May 21 00:26:48.286: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-e33a33e9-3442-4775-9b36-e4b13f6a53fd container client-container: STEP: delete the pod May 21 00:26:48.438: INFO: Waiting for pod downwardapi-volume-e33a33e9-3442-4775-9b36-e4b13f6a53fd to disappear May 21 00:26:48.446: INFO: Pod downwardapi-volume-e33a33e9-3442-4775-9b36-e4b13f6a53fd no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:26:48.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4659" for this suite. 
• [SLOW TEST:6.339 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":147,"skipped":2494,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:26:48.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 21 00:26:48.530: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e1e44c84-6008-4594-b615-9db1b55c9801" in namespace "downward-api-3364" to be "Succeeded or Failed" May 21 00:26:48.562: INFO: Pod "downwardapi-volume-e1e44c84-6008-4594-b615-9db1b55c9801": Phase="Pending", Reason="", readiness=false. 
Elapsed: 32.12325ms May 21 00:26:50.567: INFO: Pod "downwardapi-volume-e1e44c84-6008-4594-b615-9db1b55c9801": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036888878s May 21 00:26:52.571: INFO: Pod "downwardapi-volume-e1e44c84-6008-4594-b615-9db1b55c9801": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041491543s STEP: Saw pod success May 21 00:26:52.572: INFO: Pod "downwardapi-volume-e1e44c84-6008-4594-b615-9db1b55c9801" satisfied condition "Succeeded or Failed" May 21 00:26:52.574: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-e1e44c84-6008-4594-b615-9db1b55c9801 container client-container: STEP: delete the pod May 21 00:26:52.597: INFO: Waiting for pod downwardapi-volume-e1e44c84-6008-4594-b615-9db1b55c9801 to disappear May 21 00:26:52.602: INFO: Pod downwardapi-volume-e1e44c84-6008-4594-b615-9db1b55c9801 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:26:52.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3364" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":148,"skipped":2495,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:26:52.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-1378 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-1378 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1378 May 21 00:26:52.971: INFO: Found 0 stateful pods, waiting for 1 May 21 00:27:02.976: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 21 00:27:02.979: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1378 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 21 00:27:03.298: INFO: stderr: "I0521 00:27:03.112852 2883 log.go:172] (0xc000a4e9a0) (0xc00035cc80) Create stream\nI0521 00:27:03.112914 2883 log.go:172] (0xc000a4e9a0) (0xc00035cc80) Stream added, broadcasting: 1\nI0521 00:27:03.116015 2883 log.go:172] (0xc000a4e9a0) Reply frame received for 1\nI0521 00:27:03.116091 2883 log.go:172] (0xc000a4e9a0) (0xc00023a320) Create stream\nI0521 00:27:03.116124 2883 log.go:172] (0xc000a4e9a0) (0xc00023a320) Stream added, broadcasting: 3\nI0521 00:27:03.117494 2883 log.go:172] (0xc000a4e9a0) Reply frame received for 3\nI0521 00:27:03.117533 2883 log.go:172] (0xc000a4e9a0) (0xc00035d2c0) Create stream\nI0521 00:27:03.117544 2883 log.go:172] (0xc000a4e9a0) (0xc00035d2c0) Stream added, broadcasting: 5\nI0521 00:27:03.118385 2883 log.go:172] (0xc000a4e9a0) Reply frame received for 5\nI0521 00:27:03.209867 2883 log.go:172] (0xc000a4e9a0) Data frame received for 5\nI0521 00:27:03.209893 2883 log.go:172] (0xc00035d2c0) (5) Data frame handling\nI0521 00:27:03.209909 2883 log.go:172] (0xc00035d2c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0521 00:27:03.288268 2883 log.go:172] (0xc000a4e9a0) Data frame received for 3\nI0521 00:27:03.288437 2883 log.go:172] (0xc00023a320) (3) Data frame handling\nI0521 00:27:03.288488 2883 log.go:172] (0xc00023a320) (3) Data frame sent\nI0521 00:27:03.288559 2883 log.go:172] (0xc000a4e9a0) Data frame received for 3\nI0521 00:27:03.288577 2883 log.go:172] (0xc00023a320) (3) Data frame handling\nI0521 00:27:03.288595 2883 log.go:172] (0xc000a4e9a0) Data frame received for 5\nI0521 00:27:03.288606 2883 log.go:172] (0xc00035d2c0) (5) Data frame handling\nI0521 00:27:03.291970 2883 log.go:172] (0xc000a4e9a0) Data frame received for 1\nI0521 00:27:03.291999 2883 
log.go:172] (0xc00035cc80) (1) Data frame handling\nI0521 00:27:03.292020 2883 log.go:172] (0xc00035cc80) (1) Data frame sent\nI0521 00:27:03.292063 2883 log.go:172] (0xc000a4e9a0) (0xc00035cc80) Stream removed, broadcasting: 1\nI0521 00:27:03.292108 2883 log.go:172] (0xc000a4e9a0) Go away received\nI0521 00:27:03.292684 2883 log.go:172] (0xc000a4e9a0) (0xc00035cc80) Stream removed, broadcasting: 1\nI0521 00:27:03.292721 2883 log.go:172] (0xc000a4e9a0) (0xc00023a320) Stream removed, broadcasting: 3\nI0521 00:27:03.292754 2883 log.go:172] (0xc000a4e9a0) (0xc00035d2c0) Stream removed, broadcasting: 5\n" May 21 00:27:03.298: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 21 00:27:03.298: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 21 00:27:03.302: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 21 00:27:13.323: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 21 00:27:13.323: INFO: Waiting for statefulset status.replicas updated to 0 May 21 00:27:13.339: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999599s May 21 00:27:14.345: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.994116837s May 21 00:27:15.369: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.987720322s May 21 00:27:16.374: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.963775434s May 21 00:27:17.378: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.959086588s May 21 00:27:18.381: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.955102164s May 21 00:27:19.405: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.95145652s May 21 00:27:20.441: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.927731151s May 21 00:27:21.445: INFO: Verifying 
statefulset ss doesn't scale past 1 for another 1.891884128s May 21 00:27:22.450: INFO: Verifying statefulset ss doesn't scale past 1 for another 887.639477ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1378 May 21 00:27:23.455: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1378 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 00:27:23.682: INFO: stderr: "I0521 00:27:23.592845 2905 log.go:172] (0xc000a8f4a0) (0xc000886640) Create stream\nI0521 00:27:23.592889 2905 log.go:172] (0xc000a8f4a0) (0xc000886640) Stream added, broadcasting: 1\nI0521 00:27:23.595971 2905 log.go:172] (0xc000a8f4a0) Reply frame received for 1\nI0521 00:27:23.595999 2905 log.go:172] (0xc000a8f4a0) (0xc000886fa0) Create stream\nI0521 00:27:23.596007 2905 log.go:172] (0xc000a8f4a0) (0xc000886fa0) Stream added, broadcasting: 3\nI0521 00:27:23.597354 2905 log.go:172] (0xc000a8f4a0) Reply frame received for 3\nI0521 00:27:23.597548 2905 log.go:172] (0xc000a8f4a0) (0xc000890f00) Create stream\nI0521 00:27:23.597561 2905 log.go:172] (0xc000a8f4a0) (0xc000890f00) Stream added, broadcasting: 5\nI0521 00:27:23.598752 2905 log.go:172] (0xc000a8f4a0) Reply frame received for 5\nI0521 00:27:23.673514 2905 log.go:172] (0xc000a8f4a0) Data frame received for 5\nI0521 00:27:23.673556 2905 log.go:172] (0xc000890f00) (5) Data frame handling\nI0521 00:27:23.673576 2905 log.go:172] (0xc000890f00) (5) Data frame sent\nI0521 00:27:23.673594 2905 log.go:172] (0xc000a8f4a0) Data frame received for 5\nI0521 00:27:23.673608 2905 log.go:172] (0xc000890f00) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0521 00:27:23.673645 2905 log.go:172] (0xc000a8f4a0) Data frame received for 3\nI0521 00:27:23.673663 2905 log.go:172] (0xc000886fa0) (3) Data frame handling\nI0521 00:27:23.673676 2905 
log.go:172] (0xc000886fa0) (3) Data frame sent\nI0521 00:27:23.673689 2905 log.go:172] (0xc000a8f4a0) Data frame received for 3\nI0521 00:27:23.673703 2905 log.go:172] (0xc000886fa0) (3) Data frame handling\nI0521 00:27:23.675469 2905 log.go:172] (0xc000a8f4a0) Data frame received for 1\nI0521 00:27:23.675531 2905 log.go:172] (0xc000886640) (1) Data frame handling\nI0521 00:27:23.675556 2905 log.go:172] (0xc000886640) (1) Data frame sent\nI0521 00:27:23.675572 2905 log.go:172] (0xc000a8f4a0) (0xc000886640) Stream removed, broadcasting: 1\nI0521 00:27:23.675589 2905 log.go:172] (0xc000a8f4a0) Go away received\nI0521 00:27:23.676201 2905 log.go:172] (0xc000a8f4a0) (0xc000886640) Stream removed, broadcasting: 1\nI0521 00:27:23.676232 2905 log.go:172] (0xc000a8f4a0) (0xc000886fa0) Stream removed, broadcasting: 3\nI0521 00:27:23.676244 2905 log.go:172] (0xc000a8f4a0) (0xc000890f00) Stream removed, broadcasting: 5\n" May 21 00:27:23.683: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 21 00:27:23.683: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 21 00:27:23.686: INFO: Found 1 stateful pods, waiting for 3 May 21 00:27:33.692: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 21 00:27:33.692: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 21 00:27:33.692: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 21 00:27:33.710: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1378 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 21 00:27:33.941: INFO: stderr: "I0521 00:27:33.840813 2925 
log.go:172] (0xc000964000) (0xc000592500) Create stream\nI0521 00:27:33.840896 2925 log.go:172] (0xc000964000) (0xc000592500) Stream added, broadcasting: 1\nI0521 00:27:33.844607 2925 log.go:172] (0xc000964000) Reply frame received for 1\nI0521 00:27:33.844644 2925 log.go:172] (0xc000964000) (0xc0005481e0) Create stream\nI0521 00:27:33.844652 2925 log.go:172] (0xc000964000) (0xc0005481e0) Stream added, broadcasting: 3\nI0521 00:27:33.845648 2925 log.go:172] (0xc000964000) Reply frame received for 3\nI0521 00:27:33.845684 2925 log.go:172] (0xc000964000) (0xc000549180) Create stream\nI0521 00:27:33.845705 2925 log.go:172] (0xc000964000) (0xc000549180) Stream added, broadcasting: 5\nI0521 00:27:33.846665 2925 log.go:172] (0xc000964000) Reply frame received for 5\nI0521 00:27:33.934749 2925 log.go:172] (0xc000964000) Data frame received for 3\nI0521 00:27:33.934801 2925 log.go:172] (0xc0005481e0) (3) Data frame handling\nI0521 00:27:33.934813 2925 log.go:172] (0xc0005481e0) (3) Data frame sent\nI0521 00:27:33.934820 2925 log.go:172] (0xc000964000) Data frame received for 3\nI0521 00:27:33.934826 2925 log.go:172] (0xc0005481e0) (3) Data frame handling\nI0521 00:27:33.934851 2925 log.go:172] (0xc000964000) Data frame received for 5\nI0521 00:27:33.934858 2925 log.go:172] (0xc000549180) (5) Data frame handling\nI0521 00:27:33.934865 2925 log.go:172] (0xc000549180) (5) Data frame sent\nI0521 00:27:33.934870 2925 log.go:172] (0xc000964000) Data frame received for 5\nI0521 00:27:33.934876 2925 log.go:172] (0xc000549180) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0521 00:27:33.936390 2925 log.go:172] (0xc000964000) Data frame received for 1\nI0521 00:27:33.936415 2925 log.go:172] (0xc000592500) (1) Data frame handling\nI0521 00:27:33.936452 2925 log.go:172] (0xc000592500) (1) Data frame sent\nI0521 00:27:33.936470 2925 log.go:172] (0xc000964000) (0xc000592500) Stream removed, broadcasting: 1\nI0521 00:27:33.936483 2925 log.go:172] 
(0xc000964000) Go away received\nI0521 00:27:33.936833 2925 log.go:172] (0xc000964000) (0xc000592500) Stream removed, broadcasting: 1\nI0521 00:27:33.936851 2925 log.go:172] (0xc000964000) (0xc0005481e0) Stream removed, broadcasting: 3\nI0521 00:27:33.936860 2925 log.go:172] (0xc000964000) (0xc000549180) Stream removed, broadcasting: 5\n" May 21 00:27:33.941: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 21 00:27:33.941: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 21 00:27:33.942: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1378 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 21 00:27:34.215: INFO: stderr: "I0521 00:27:34.103315 2945 log.go:172] (0xc00003a6e0) (0xc0006e75e0) Create stream\nI0521 00:27:34.103384 2945 log.go:172] (0xc00003a6e0) (0xc0006e75e0) Stream added, broadcasting: 1\nI0521 00:27:34.106833 2945 log.go:172] (0xc00003a6e0) Reply frame received for 1\nI0521 00:27:34.106884 2945 log.go:172] (0xc00003a6e0) (0xc0006ac640) Create stream\nI0521 00:27:34.106901 2945 log.go:172] (0xc00003a6e0) (0xc0006ac640) Stream added, broadcasting: 3\nI0521 00:27:34.108191 2945 log.go:172] (0xc00003a6e0) Reply frame received for 3\nI0521 00:27:34.108243 2945 log.go:172] (0xc00003a6e0) (0xc000536e60) Create stream\nI0521 00:27:34.108261 2945 log.go:172] (0xc00003a6e0) (0xc000536e60) Stream added, broadcasting: 5\nI0521 00:27:34.109481 2945 log.go:172] (0xc00003a6e0) Reply frame received for 5\nI0521 00:27:34.159134 2945 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0521 00:27:34.159168 2945 log.go:172] (0xc000536e60) (5) Data frame handling\nI0521 00:27:34.159190 2945 log.go:172] (0xc000536e60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0521 00:27:34.206326 2945 
log.go:172] (0xc00003a6e0) Data frame received for 3\nI0521 00:27:34.206371 2945 log.go:172] (0xc0006ac640) (3) Data frame handling\nI0521 00:27:34.206395 2945 log.go:172] (0xc0006ac640) (3) Data frame sent\nI0521 00:27:34.206414 2945 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0521 00:27:34.206432 2945 log.go:172] (0xc0006ac640) (3) Data frame handling\nI0521 00:27:34.206458 2945 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0521 00:27:34.206488 2945 log.go:172] (0xc000536e60) (5) Data frame handling\nI0521 00:27:34.208775 2945 log.go:172] (0xc00003a6e0) Data frame received for 1\nI0521 00:27:34.208813 2945 log.go:172] (0xc0006e75e0) (1) Data frame handling\nI0521 00:27:34.208835 2945 log.go:172] (0xc0006e75e0) (1) Data frame sent\nI0521 00:27:34.208857 2945 log.go:172] (0xc00003a6e0) (0xc0006e75e0) Stream removed, broadcasting: 1\nI0521 00:27:34.208893 2945 log.go:172] (0xc00003a6e0) Go away received\nI0521 00:27:34.209525 2945 log.go:172] (0xc00003a6e0) (0xc0006e75e0) Stream removed, broadcasting: 1\nI0521 00:27:34.209556 2945 log.go:172] (0xc00003a6e0) (0xc0006ac640) Stream removed, broadcasting: 3\nI0521 00:27:34.209570 2945 log.go:172] (0xc00003a6e0) (0xc000536e60) Stream removed, broadcasting: 5\n" May 21 00:27:34.215: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 21 00:27:34.215: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 21 00:27:34.215: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1378 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 21 00:27:34.453: INFO: stderr: "I0521 00:27:34.360810 2967 log.go:172] (0xc000b2cd10) (0xc00098e0a0) Create stream\nI0521 00:27:34.360867 2967 log.go:172] (0xc000b2cd10) (0xc00098e0a0) Stream added, broadcasting: 1\nI0521 00:27:34.362913 2967 
log.go:172] (0xc000b2cd10) Reply frame received for 1\nI0521 00:27:34.362949 2967 log.go:172] (0xc000b2cd10) (0xc0009850e0) Create stream\nI0521 00:27:34.362963 2967 log.go:172] (0xc000b2cd10) (0xc0009850e0) Stream added, broadcasting: 3\nI0521 00:27:34.364031 2967 log.go:172] (0xc000b2cd10) Reply frame received for 3\nI0521 00:27:34.364064 2967 log.go:172] (0xc000b2cd10) (0xc000980960) Create stream\nI0521 00:27:34.364076 2967 log.go:172] (0xc000b2cd10) (0xc000980960) Stream added, broadcasting: 5\nI0521 00:27:34.365053 2967 log.go:172] (0xc000b2cd10) Reply frame received for 5\nI0521 00:27:34.419971 2967 log.go:172] (0xc000b2cd10) Data frame received for 5\nI0521 00:27:34.419994 2967 log.go:172] (0xc000980960) (5) Data frame handling\nI0521 00:27:34.420007 2967 log.go:172] (0xc000980960) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0521 00:27:34.446242 2967 log.go:172] (0xc000b2cd10) Data frame received for 3\nI0521 00:27:34.446271 2967 log.go:172] (0xc0009850e0) (3) Data frame handling\nI0521 00:27:34.446289 2967 log.go:172] (0xc0009850e0) (3) Data frame sent\nI0521 00:27:34.446432 2967 log.go:172] (0xc000b2cd10) Data frame received for 5\nI0521 00:27:34.446471 2967 log.go:172] (0xc000980960) (5) Data frame handling\nI0521 00:27:34.446510 2967 log.go:172] (0xc000b2cd10) Data frame received for 3\nI0521 00:27:34.446531 2967 log.go:172] (0xc0009850e0) (3) Data frame handling\nI0521 00:27:34.448617 2967 log.go:172] (0xc000b2cd10) Data frame received for 1\nI0521 00:27:34.448641 2967 log.go:172] (0xc00098e0a0) (1) Data frame handling\nI0521 00:27:34.448653 2967 log.go:172] (0xc00098e0a0) (1) Data frame sent\nI0521 00:27:34.448661 2967 log.go:172] (0xc000b2cd10) (0xc00098e0a0) Stream removed, broadcasting: 1\nI0521 00:27:34.448673 2967 log.go:172] (0xc000b2cd10) Go away received\nI0521 00:27:34.449359 2967 log.go:172] (0xc000b2cd10) (0xc00098e0a0) Stream removed, broadcasting: 1\nI0521 00:27:34.449379 2967 log.go:172] (0xc000b2cd10) 
(0xc0009850e0) Stream removed, broadcasting: 3\nI0521 00:27:34.449388 2967 log.go:172] (0xc000b2cd10) (0xc000980960) Stream removed, broadcasting: 5\n" May 21 00:27:34.453: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 21 00:27:34.453: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 21 00:27:34.453: INFO: Waiting for statefulset status.replicas updated to 0 May 21 00:27:34.457: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 21 00:27:44.467: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 21 00:27:44.467: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 21 00:27:44.467: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 21 00:27:44.532: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999591s May 21 00:27:45.537: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.942550341s May 21 00:27:46.543: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.937110432s May 21 00:27:47.546: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.931721813s May 21 00:27:48.551: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.928043847s May 21 00:27:49.574: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.922695004s May 21 00:27:50.580: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.900014443s May 21 00:27:51.586: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.893784632s May 21 00:27:52.591: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.888408013s May 21 00:27:53.596: INFO: Verifying statefulset ss doesn't scale past 3 for another 882.733632ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in 
namespacestatefulset-1378 May 21 00:27:54.602: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1378 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 00:27:54.953: INFO: stderr: "I0521 00:27:54.797863 2987 log.go:172] (0xc000a1ac60) (0xc0003d6be0) Create stream\nI0521 00:27:54.797914 2987 log.go:172] (0xc000a1ac60) (0xc0003d6be0) Stream added, broadcasting: 1\nI0521 00:27:54.800196 2987 log.go:172] (0xc000a1ac60) Reply frame received for 1\nI0521 00:27:54.800238 2987 log.go:172] (0xc000a1ac60) (0xc0003d74a0) Create stream\nI0521 00:27:54.800253 2987 log.go:172] (0xc000a1ac60) (0xc0003d74a0) Stream added, broadcasting: 3\nI0521 00:27:54.801243 2987 log.go:172] (0xc000a1ac60) Reply frame received for 3\nI0521 00:27:54.801284 2987 log.go:172] (0xc000a1ac60) (0xc0000c4280) Create stream\nI0521 00:27:54.801300 2987 log.go:172] (0xc000a1ac60) (0xc0000c4280) Stream added, broadcasting: 5\nI0521 00:27:54.802108 2987 log.go:172] (0xc000a1ac60) Reply frame received for 5\nI0521 00:27:54.946506 2987 log.go:172] (0xc000a1ac60) Data frame received for 3\nI0521 00:27:54.946527 2987 log.go:172] (0xc0003d74a0) (3) Data frame handling\nI0521 00:27:54.946536 2987 log.go:172] (0xc0003d74a0) (3) Data frame sent\nI0521 00:27:54.946574 2987 log.go:172] (0xc000a1ac60) Data frame received for 5\nI0521 00:27:54.946604 2987 log.go:172] (0xc0000c4280) (5) Data frame handling\nI0521 00:27:54.946626 2987 log.go:172] (0xc0000c4280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0521 00:27:54.946653 2987 log.go:172] (0xc000a1ac60) Data frame received for 3\nI0521 00:27:54.946669 2987 log.go:172] (0xc0003d74a0) (3) Data frame handling\nI0521 00:27:54.946790 2987 log.go:172] (0xc000a1ac60) Data frame received for 5\nI0521 00:27:54.946801 2987 log.go:172] (0xc0000c4280) (5) Data frame handling\nI0521 00:27:54.948231 2987 log.go:172] (0xc000a1ac60) 
Data frame received for 1\nI0521 00:27:54.948248 2987 log.go:172] (0xc0003d6be0) (1) Data frame handling\nI0521 00:27:54.948283 2987 log.go:172] (0xc0003d6be0) (1) Data frame sent\nI0521 00:27:54.948302 2987 log.go:172] (0xc000a1ac60) (0xc0003d6be0) Stream removed, broadcasting: 1\nI0521 00:27:54.948352 2987 log.go:172] (0xc000a1ac60) Go away received\nI0521 00:27:54.948589 2987 log.go:172] (0xc000a1ac60) (0xc0003d6be0) Stream removed, broadcasting: 1\nI0521 00:27:54.948607 2987 log.go:172] (0xc000a1ac60) (0xc0003d74a0) Stream removed, broadcasting: 3\nI0521 00:27:54.948618 2987 log.go:172] (0xc000a1ac60) (0xc0000c4280) Stream removed, broadcasting: 5\n" May 21 00:27:54.953: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 21 00:27:54.953: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 21 00:27:54.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1378 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 00:27:55.173: INFO: stderr: "I0521 00:27:55.096259 3007 log.go:172] (0xc000af1080) (0xc000ae05a0) Create stream\nI0521 00:27:55.096303 3007 log.go:172] (0xc000af1080) (0xc000ae05a0) Stream added, broadcasting: 1\nI0521 00:27:55.101567 3007 log.go:172] (0xc000af1080) Reply frame received for 1\nI0521 00:27:55.101602 3007 log.go:172] (0xc000af1080) (0xc0009345a0) Create stream\nI0521 00:27:55.101617 3007 log.go:172] (0xc000af1080) (0xc0009345a0) Stream added, broadcasting: 3\nI0521 00:27:55.102347 3007 log.go:172] (0xc000af1080) Reply frame received for 3\nI0521 00:27:55.102372 3007 log.go:172] (0xc000af1080) (0xc000924640) Create stream\nI0521 00:27:55.102390 3007 log.go:172] (0xc000af1080) (0xc000924640) Stream added, broadcasting: 5\nI0521 00:27:55.103300 3007 log.go:172] (0xc000af1080) Reply frame received for 
5\nI0521 00:27:55.167043 3007 log.go:172] (0xc000af1080) Data frame received for 5\nI0521 00:27:55.167091 3007 log.go:172] (0xc000924640) (5) Data frame handling\nI0521 00:27:55.167114 3007 log.go:172] (0xc000924640) (5) Data frame sent\nI0521 00:27:55.167131 3007 log.go:172] (0xc000af1080) Data frame received for 5\nI0521 00:27:55.167146 3007 log.go:172] (0xc000924640) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0521 00:27:55.167193 3007 log.go:172] (0xc000af1080) Data frame received for 3\nI0521 00:27:55.167210 3007 log.go:172] (0xc0009345a0) (3) Data frame handling\nI0521 00:27:55.167226 3007 log.go:172] (0xc0009345a0) (3) Data frame sent\nI0521 00:27:55.167414 3007 log.go:172] (0xc000af1080) Data frame received for 3\nI0521 00:27:55.167511 3007 log.go:172] (0xc0009345a0) (3) Data frame handling\nI0521 00:27:55.168995 3007 log.go:172] (0xc000af1080) Data frame received for 1\nI0521 00:27:55.169010 3007 log.go:172] (0xc000ae05a0) (1) Data frame handling\nI0521 00:27:55.169020 3007 log.go:172] (0xc000ae05a0) (1) Data frame sent\nI0521 00:27:55.169395 3007 log.go:172] (0xc000af1080) (0xc000ae05a0) Stream removed, broadcasting: 1\nI0521 00:27:55.169448 3007 log.go:172] (0xc000af1080) Go away received\nI0521 00:27:55.169681 3007 log.go:172] (0xc000af1080) (0xc000ae05a0) Stream removed, broadcasting: 1\nI0521 00:27:55.169696 3007 log.go:172] (0xc000af1080) (0xc0009345a0) Stream removed, broadcasting: 3\nI0521 00:27:55.169701 3007 log.go:172] (0xc000af1080) (0xc000924640) Stream removed, broadcasting: 5\n" May 21 00:27:55.173: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 21 00:27:55.173: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 21 00:27:55.174: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1378 ss-2 -- 
/bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 00:27:55.374: INFO: stderr: "I0521 00:27:55.296391 3025 log.go:172] (0xc0009613f0) (0xc00064b4a0) Create stream\nI0521 00:27:55.296463 3025 log.go:172] (0xc0009613f0) (0xc00064b4a0) Stream added, broadcasting: 1\nI0521 00:27:55.301355 3025 log.go:172] (0xc0009613f0) Reply frame received for 1\nI0521 00:27:55.301390 3025 log.go:172] (0xc0009613f0) (0xc0005101e0) Create stream\nI0521 00:27:55.301405 3025 log.go:172] (0xc0009613f0) (0xc0005101e0) Stream added, broadcasting: 3\nI0521 00:27:55.302316 3025 log.go:172] (0xc0009613f0) Reply frame received for 3\nI0521 00:27:55.302341 3025 log.go:172] (0xc0009613f0) (0xc0004f8140) Create stream\nI0521 00:27:55.302353 3025 log.go:172] (0xc0009613f0) (0xc0004f8140) Stream added, broadcasting: 5\nI0521 00:27:55.303087 3025 log.go:172] (0xc0009613f0) Reply frame received for 5\nI0521 00:27:55.367238 3025 log.go:172] (0xc0009613f0) Data frame received for 3\nI0521 00:27:55.367262 3025 log.go:172] (0xc0005101e0) (3) Data frame handling\nI0521 00:27:55.367278 3025 log.go:172] (0xc0005101e0) (3) Data frame sent\nI0521 00:27:55.367284 3025 log.go:172] (0xc0009613f0) Data frame received for 3\nI0521 00:27:55.367290 3025 log.go:172] (0xc0005101e0) (3) Data frame handling\nI0521 00:27:55.368085 3025 log.go:172] (0xc0009613f0) Data frame received for 5\nI0521 00:27:55.368113 3025 log.go:172] (0xc0004f8140) (5) Data frame handling\nI0521 00:27:55.368137 3025 log.go:172] (0xc0004f8140) (5) Data frame sent\nI0521 00:27:55.368148 3025 log.go:172] (0xc0009613f0) Data frame received for 5\nI0521 00:27:55.368164 3025 log.go:172] (0xc0004f8140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0521 00:27:55.369450 3025 log.go:172] (0xc0009613f0) Data frame received for 1\nI0521 00:27:55.369473 3025 log.go:172] (0xc00064b4a0) (1) Data frame handling\nI0521 00:27:55.369487 3025 log.go:172] (0xc00064b4a0) (1) Data frame sent\nI0521 
00:27:55.369501 3025 log.go:172] (0xc0009613f0) (0xc00064b4a0) Stream removed, broadcasting: 1\nI0521 00:27:55.369521 3025 log.go:172] (0xc0009613f0) Go away received\nI0521 00:27:55.369863 3025 log.go:172] (0xc0009613f0) (0xc00064b4a0) Stream removed, broadcasting: 1\nI0521 00:27:55.369882 3025 log.go:172] (0xc0009613f0) (0xc0005101e0) Stream removed, broadcasting: 3\nI0521 00:27:55.369890 3025 log.go:172] (0xc0009613f0) (0xc0004f8140) Stream removed, broadcasting: 5\n" May 21 00:27:55.374: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 21 00:27:55.374: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 21 00:27:55.374: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 21 00:28:25.414: INFO: Deleting all statefulset in ns statefulset-1378 May 21 00:28:25.418: INFO: Scaling statefulset ss to 0 May 21 00:28:25.427: INFO: Waiting for statefulset status.replicas updated to 0 May 21 00:28:25.430: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:28:25.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1378" for this suite. 
• [SLOW TEST:92.868 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":288,"completed":149,"skipped":2548,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:28:25.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-60fbc6f8-c0e5-41b8-9825-2406ec66617c STEP: Creating a pod to test consume configMaps May 21 00:28:25.604: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c56cdd21-bac7-4938-8090-d7f024926f63" in namespace "projected-9082" to be "Succeeded or Failed" May 21 00:28:25.626: INFO: Pod 
"pod-projected-configmaps-c56cdd21-bac7-4938-8090-d7f024926f63": Phase="Pending", Reason="", readiness=false. Elapsed: 21.205622ms May 21 00:28:27.630: INFO: Pod "pod-projected-configmaps-c56cdd21-bac7-4938-8090-d7f024926f63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025468985s May 21 00:28:29.663: INFO: Pod "pod-projected-configmaps-c56cdd21-bac7-4938-8090-d7f024926f63": Phase="Running", Reason="", readiness=true. Elapsed: 4.058666236s May 21 00:28:31.667: INFO: Pod "pod-projected-configmaps-c56cdd21-bac7-4938-8090-d7f024926f63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.062969326s STEP: Saw pod success May 21 00:28:31.667: INFO: Pod "pod-projected-configmaps-c56cdd21-bac7-4938-8090-d7f024926f63" satisfied condition "Succeeded or Failed" May 21 00:28:31.671: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-c56cdd21-bac7-4938-8090-d7f024926f63 container projected-configmap-volume-test: STEP: delete the pod May 21 00:28:31.755: INFO: Waiting for pod pod-projected-configmaps-c56cdd21-bac7-4938-8090-d7f024926f63 to disappear May 21 00:28:31.764: INFO: Pod pod-projected-configmaps-c56cdd21-bac7-4938-8090-d7f024926f63 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:28:31.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9082" for this suite. 
• [SLOW TEST:6.306 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":150,"skipped":2550,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:28:31.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-97779074-146c-49a5-8298-a97edf8f1af6 STEP: Creating secret with name secret-projected-all-test-volume-b0b952fb-1e6c-4589-8b92-56d08ac66dcb STEP: Creating a pod to test Check all projections for projected volume plugin May 21 00:28:31.895: INFO: Waiting up to 5m0s for pod "projected-volume-c623be36-5a06-4458-8149-63987f9d537e" in namespace "projected-7665" to be "Succeeded or Failed" May 21 00:28:31.897: INFO: Pod "projected-volume-c623be36-5a06-4458-8149-63987f9d537e": Phase="Pending", Reason="", 
readiness=false. Elapsed: 2.294896ms May 21 00:28:33.911: INFO: Pod "projected-volume-c623be36-5a06-4458-8149-63987f9d537e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015649082s May 21 00:28:35.914: INFO: Pod "projected-volume-c623be36-5a06-4458-8149-63987f9d537e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01938264s STEP: Saw pod success May 21 00:28:35.914: INFO: Pod "projected-volume-c623be36-5a06-4458-8149-63987f9d537e" satisfied condition "Succeeded or Failed" May 21 00:28:35.917: INFO: Trying to get logs from node latest-worker pod projected-volume-c623be36-5a06-4458-8149-63987f9d537e container projected-all-volume-test: STEP: delete the pod May 21 00:28:35.998: INFO: Waiting for pod projected-volume-c623be36-5a06-4458-8149-63987f9d537e to disappear May 21 00:28:36.001: INFO: Pod projected-volume-c623be36-5a06-4458-8149-63987f9d537e no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:28:36.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7665" for this suite. 
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":288,"completed":151,"skipped":2577,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:28:36.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-8795 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 21 00:28:36.318: INFO: Found 0 stateful pods, waiting for 3 May 21 00:28:46.335: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 21 00:28:46.335: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 21 00:28:46.335: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 21 00:28:56.324: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 21 00:28:56.324: INFO: 
Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 21 00:28:56.324: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 21 00:28:56.334: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8795 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 21 00:28:56.637: INFO: stderr: "I0521 00:28:56.484157 3040 log.go:172] (0xc000beb340) (0xc00081fcc0) Create stream\nI0521 00:28:56.484213 3040 log.go:172] (0xc000beb340) (0xc00081fcc0) Stream added, broadcasting: 1\nI0521 00:28:56.487021 3040 log.go:172] (0xc000beb340) Reply frame received for 1\nI0521 00:28:56.487068 3040 log.go:172] (0xc000beb340) (0xc0005e1cc0) Create stream\nI0521 00:28:56.487085 3040 log.go:172] (0xc000beb340) (0xc0005e1cc0) Stream added, broadcasting: 3\nI0521 00:28:56.488033 3040 log.go:172] (0xc000beb340) Reply frame received for 3\nI0521 00:28:56.488066 3040 log.go:172] (0xc000beb340) (0xc00082a6e0) Create stream\nI0521 00:28:56.488075 3040 log.go:172] (0xc000beb340) (0xc00082a6e0) Stream added, broadcasting: 5\nI0521 00:28:56.489091 3040 log.go:172] (0xc000beb340) Reply frame received for 5\nI0521 00:28:56.596587 3040 log.go:172] (0xc000beb340) Data frame received for 5\nI0521 00:28:56.596623 3040 log.go:172] (0xc00082a6e0) (5) Data frame handling\nI0521 00:28:56.596651 3040 log.go:172] (0xc00082a6e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0521 00:28:56.629804 3040 log.go:172] (0xc000beb340) Data frame received for 3\nI0521 00:28:56.629837 3040 log.go:172] (0xc0005e1cc0) (3) Data frame handling\nI0521 00:28:56.629855 3040 log.go:172] (0xc0005e1cc0) (3) Data frame sent\nI0521 00:28:56.629870 3040 log.go:172] (0xc000beb340) Data frame received for 3\nI0521 00:28:56.629890 3040 log.go:172] (0xc000beb340) Data frame received for 5\nI0521 00:28:56.629906 3040 log.go:172] 
(0xc00082a6e0) (5) Data frame handling\nI0521 00:28:56.629936 3040 log.go:172] (0xc0005e1cc0) (3) Data frame handling\nI0521 00:28:56.631690 3040 log.go:172] (0xc000beb340) Data frame received for 1\nI0521 00:28:56.631713 3040 log.go:172] (0xc00081fcc0) (1) Data frame handling\nI0521 00:28:56.631736 3040 log.go:172] (0xc00081fcc0) (1) Data frame sent\nI0521 00:28:56.631752 3040 log.go:172] (0xc000beb340) (0xc00081fcc0) Stream removed, broadcasting: 1\nI0521 00:28:56.631914 3040 log.go:172] (0xc000beb340) Go away received\nI0521 00:28:56.632023 3040 log.go:172] (0xc000beb340) (0xc00081fcc0) Stream removed, broadcasting: 1\nI0521 00:28:56.632042 3040 log.go:172] (0xc000beb340) (0xc0005e1cc0) Stream removed, broadcasting: 3\nI0521 00:28:56.632048 3040 log.go:172] (0xc000beb340) (0xc00082a6e0) Stream removed, broadcasting: 5\n" May 21 00:28:56.637: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 21 00:28:56.638: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 21 00:29:06.674: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 21 00:29:16.730: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8795 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 00:29:16.961: INFO: stderr: "I0521 00:29:16.875087 3062 log.go:172] (0xc000ae54a0) (0xc0007365a0) Create stream\nI0521 00:29:16.875148 3062 log.go:172] (0xc000ae54a0) (0xc0007365a0) Stream added, broadcasting: 1\nI0521 00:29:16.880493 3062 log.go:172] (0xc000ae54a0) Reply frame received for 1\nI0521 00:29:16.880539 3062 log.go:172] (0xc000ae54a0) (0xc0007274a0) Create stream\nI0521 
00:29:16.880553 3062 log.go:172] (0xc000ae54a0) (0xc0007274a0) Stream added, broadcasting: 3\nI0521 00:29:16.881971 3062 log.go:172] (0xc000ae54a0) Reply frame received for 3\nI0521 00:29:16.882038 3062 log.go:172] (0xc000ae54a0) (0xc000710c80) Create stream\nI0521 00:29:16.882059 3062 log.go:172] (0xc000ae54a0) (0xc000710c80) Stream added, broadcasting: 5\nI0521 00:29:16.883163 3062 log.go:172] (0xc000ae54a0) Reply frame received for 5\nI0521 00:29:16.954944 3062 log.go:172] (0xc000ae54a0) Data frame received for 5\nI0521 00:29:16.954974 3062 log.go:172] (0xc000710c80) (5) Data frame handling\nI0521 00:29:16.955001 3062 log.go:172] (0xc000710c80) (5) Data frame sent\nI0521 00:29:16.955012 3062 log.go:172] (0xc000ae54a0) Data frame received for 5\nI0521 00:29:16.955023 3062 log.go:172] (0xc000710c80) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0521 00:29:16.955068 3062 log.go:172] (0xc000ae54a0) Data frame received for 3\nI0521 00:29:16.955089 3062 log.go:172] (0xc0007274a0) (3) Data frame handling\nI0521 00:29:16.955104 3062 log.go:172] (0xc0007274a0) (3) Data frame sent\nI0521 00:29:16.955118 3062 log.go:172] (0xc000ae54a0) Data frame received for 3\nI0521 00:29:16.955129 3062 log.go:172] (0xc0007274a0) (3) Data frame handling\nI0521 00:29:16.956439 3062 log.go:172] (0xc000ae54a0) Data frame received for 1\nI0521 00:29:16.956482 3062 log.go:172] (0xc0007365a0) (1) Data frame handling\nI0521 00:29:16.956501 3062 log.go:172] (0xc0007365a0) (1) Data frame sent\nI0521 00:29:16.956526 3062 log.go:172] (0xc000ae54a0) (0xc0007365a0) Stream removed, broadcasting: 1\nI0521 00:29:16.956587 3062 log.go:172] (0xc000ae54a0) Go away received\nI0521 00:29:16.957058 3062 log.go:172] (0xc000ae54a0) (0xc0007365a0) Stream removed, broadcasting: 1\nI0521 00:29:16.957086 3062 log.go:172] (0xc000ae54a0) (0xc0007274a0) Stream removed, broadcasting: 3\nI0521 00:29:16.957099 3062 log.go:172] (0xc000ae54a0) (0xc000710c80) Stream removed, broadcasting: 
5\n"
May 21 00:29:16.961: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 21 00:29:16.961: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
May 21 00:29:27.166: INFO: Waiting for StatefulSet statefulset-8795/ss2 to complete update
May 21 00:29:27.166: INFO: Waiting for Pod statefulset-8795/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May 21 00:29:27.166: INFO: Waiting for Pod statefulset-8795/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May 21 00:29:37.175: INFO: Waiting for StatefulSet statefulset-8795/ss2 to complete update
May 21 00:29:37.175: INFO: Waiting for Pod statefulset-8795/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May 21 00:29:47.175: INFO: Waiting for StatefulSet statefulset-8795/ss2 to complete update
STEP: Rolling back to a previous revision
May 21 00:29:57.190: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8795 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 21 00:29:57.674: INFO: stderr: "I0521 00:29:57.331125 3084 log.go:172] (0xc000989550) (0xc000ac8500) Create stream\nI0521 00:29:57.331180 3084 log.go:172] (0xc000989550) (0xc000ac8500) Stream added, broadcasting: 1\nI0521 00:29:57.335530 3084 log.go:172] (0xc000989550) Reply frame received for 1\nI0521 00:29:57.335579 3084 log.go:172] (0xc000989550) (0xc00024d0e0) Create stream\nI0521 00:29:57.335592 3084 log.go:172] (0xc000989550) (0xc00024d0e0) Stream added, broadcasting: 3\nI0521 00:29:57.336478 3084 log.go:172] (0xc000989550) Reply frame received for 3\nI0521 00:29:57.336531 3084 log.go:172] (0xc000989550) (0xc000828dc0) Create stream\nI0521 00:29:57.336550 3084 log.go:172] (0xc000989550) (0xc000828dc0) Stream added, broadcasting: 5\nI0521 00:29:57.337781 3084 
log.go:172] (0xc000989550) Reply frame received for 5\nI0521 00:29:57.639279 3084 log.go:172] (0xc000989550) Data frame received for 5\nI0521 00:29:57.639299 3084 log.go:172] (0xc000828dc0) (5) Data frame handling\nI0521 00:29:57.639317 3084 log.go:172] (0xc000828dc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0521 00:29:57.668056 3084 log.go:172] (0xc000989550) Data frame received for 3\nI0521 00:29:57.668080 3084 log.go:172] (0xc00024d0e0) (3) Data frame handling\nI0521 00:29:57.668089 3084 log.go:172] (0xc00024d0e0) (3) Data frame sent\nI0521 00:29:57.668095 3084 log.go:172] (0xc000989550) Data frame received for 3\nI0521 00:29:57.668107 3084 log.go:172] (0xc000989550) Data frame received for 5\nI0521 00:29:57.668127 3084 log.go:172] (0xc000828dc0) (5) Data frame handling\nI0521 00:29:57.668137 3084 log.go:172] (0xc00024d0e0) (3) Data frame handling\nI0521 00:29:57.670143 3084 log.go:172] (0xc000989550) Data frame received for 1\nI0521 00:29:57.670160 3084 log.go:172] (0xc000ac8500) (1) Data frame handling\nI0521 00:29:57.670172 3084 log.go:172] (0xc000ac8500) (1) Data frame sent\nI0521 00:29:57.670183 3084 log.go:172] (0xc000989550) (0xc000ac8500) Stream removed, broadcasting: 1\nI0521 00:29:57.670200 3084 log.go:172] (0xc000989550) Go away received\nI0521 00:29:57.670371 3084 log.go:172] (0xc000989550) (0xc000ac8500) Stream removed, broadcasting: 1\nI0521 00:29:57.670382 3084 log.go:172] (0xc000989550) (0xc00024d0e0) Stream removed, broadcasting: 3\nI0521 00:29:57.670387 3084 log.go:172] (0xc000989550) (0xc000828dc0) Stream removed, broadcasting: 5\n" May 21 00:29:57.674: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 21 00:29:57.674: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 21 00:30:07.706: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 21 
00:30:17.724: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-8795 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 00:30:17.940: INFO: stderr: "I0521 00:30:17.854357 3103 log.go:172] (0xc000c1a0b0) (0xc00023c3c0) Create stream\nI0521 00:30:17.854411 3103 log.go:172] (0xc000c1a0b0) (0xc00023c3c0) Stream added, broadcasting: 1\nI0521 00:30:17.856892 3103 log.go:172] (0xc000c1a0b0) Reply frame received for 1\nI0521 00:30:17.856928 3103 log.go:172] (0xc000c1a0b0) (0xc000139540) Create stream\nI0521 00:30:17.856943 3103 log.go:172] (0xc000c1a0b0) (0xc000139540) Stream added, broadcasting: 3\nI0521 00:30:17.857977 3103 log.go:172] (0xc000c1a0b0) Reply frame received for 3\nI0521 00:30:17.858019 3103 log.go:172] (0xc000c1a0b0) (0xc0003c8460) Create stream\nI0521 00:30:17.858381 3103 log.go:172] (0xc000c1a0b0) (0xc0003c8460) Stream added, broadcasting: 5\nI0521 00:30:17.860236 3103 log.go:172] (0xc000c1a0b0) Reply frame received for 5\nI0521 00:30:17.930477 3103 log.go:172] (0xc000c1a0b0) Data frame received for 3\nI0521 00:30:17.930618 3103 log.go:172] (0xc000139540) (3) Data frame handling\nI0521 00:30:17.930655 3103 log.go:172] (0xc000139540) (3) Data frame sent\nI0521 00:30:17.930677 3103 log.go:172] (0xc000c1a0b0) Data frame received for 3\nI0521 00:30:17.930697 3103 log.go:172] (0xc000139540) (3) Data frame handling\nI0521 00:30:17.930722 3103 log.go:172] (0xc000c1a0b0) Data frame received for 5\nI0521 00:30:17.930741 3103 log.go:172] (0xc0003c8460) (5) Data frame handling\nI0521 00:30:17.930762 3103 log.go:172] (0xc0003c8460) (5) Data frame sent\nI0521 00:30:17.930783 3103 log.go:172] (0xc000c1a0b0) Data frame received for 5\nI0521 00:30:17.930808 3103 log.go:172] (0xc0003c8460) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0521 00:30:17.932236 3103 log.go:172] (0xc000c1a0b0) Data frame received for 1\nI0521 
00:30:17.932272 3103 log.go:172] (0xc00023c3c0) (1) Data frame handling\nI0521 00:30:17.932308 3103 log.go:172] (0xc00023c3c0) (1) Data frame sent\nI0521 00:30:17.932346 3103 log.go:172] (0xc000c1a0b0) (0xc00023c3c0) Stream removed, broadcasting: 1\nI0521 00:30:17.932378 3103 log.go:172] (0xc000c1a0b0) Go away received\nI0521 00:30:17.933340 3103 log.go:172] (0xc000c1a0b0) (0xc00023c3c0) Stream removed, broadcasting: 1\nI0521 00:30:17.933378 3103 log.go:172] (0xc000c1a0b0) (0xc000139540) Stream removed, broadcasting: 3\nI0521 00:30:17.933397 3103 log.go:172] (0xc000c1a0b0) (0xc0003c8460) Stream removed, broadcasting: 5\n" May 21 00:30:17.940: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 21 00:30:17.940: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 21 00:30:47.977: INFO: Waiting for StatefulSet statefulset-8795/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 21 00:30:57.984: INFO: Deleting all statefulset in ns statefulset-8795 May 21 00:30:57.987: INFO: Scaling statefulset ss2 to 0 May 21 00:31:28.004: INFO: Waiting for statefulset status.replicas updated to 0 May 21 00:31:28.007: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:31:28.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8795" for this suite. 
• [SLOW TEST:172.007 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":288,"completed":152,"skipped":2587,"failed":0}
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 00:31:28.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 21 00:31:28.094: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 21 00:31:28.135: INFO: Waiting for terminating namespaces to be deleted...
May 21 00:31:28.138: INFO: Logging pods the apiserver thinks is on node latest-worker before test
May 21 00:31:28.143: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded)
May 21 00:31:28.143: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0
May 21 00:31:28.143: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded)
May 21 00:31:28.143: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0
May 21 00:31:28.143: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded)
May 21 00:31:28.143: INFO: Container kindnet-cni ready: true, restart count 0
May 21 00:31:28.143: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded)
May 21 00:31:28.143: INFO: Container kube-proxy ready: true, restart count 0
May 21 00:31:28.143: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test
May 21 00:31:28.148: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded)
May 21 00:31:28.148: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0
May 21 00:31:28.148: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded)
May 21 00:31:28.148: INFO: Container terminate-cmd-rpa ready: true, restart count 2
May 21 00:31:28.148: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded)
May 21 00:31:28.148: INFO: Container kindnet-cni ready: true, restart count 0
May 21 00:31:28.148: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded)
May 21 00:31:28.148: INFO: Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: verifying the node has the label node latest-worker
STEP: verifying the node has the label node latest-worker2
May 21 00:31:28.266: INFO: Pod rally-c184502e-30nwopzm requesting resource cpu=0m on Node latest-worker
May 21 00:31:28.266: INFO: Pod terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 requesting resource cpu=0m on Node latest-worker2
May 21 00:31:28.266: INFO: Pod kindnet-hg2tf requesting resource cpu=100m on Node latest-worker
May 21 00:31:28.266: INFO: Pod kindnet-jl4dn requesting resource cpu=100m on Node latest-worker2
May 21 00:31:28.266: INFO: Pod kube-proxy-c8n27 requesting resource cpu=0m on Node latest-worker
May 21 00:31:28.266: INFO: Pod kube-proxy-pcmmp requesting resource cpu=0m on Node latest-worker2
STEP: Starting Pods to consume most of the cluster CPU.
May 21 00:31:28.266: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker
May 21 00:31:28.275: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-161d5949-90e1-402a-9d68-5c29a8d17b04.1610e3c16aa3569f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7959/filler-pod-161d5949-90e1-402a-9d68-5c29a8d17b04 to latest-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-161d5949-90e1-402a-9d68-5c29a8d17b04.1610e3c1f1119644], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-161d5949-90e1-402a-9d68-5c29a8d17b04.1610e3c22d2b96c4], Reason = [Created], Message = [Created container filler-pod-161d5949-90e1-402a-9d68-5c29a8d17b04]
STEP: Considering event: Type = [Normal], Name = [filler-pod-161d5949-90e1-402a-9d68-5c29a8d17b04.1610e3c23f163d20], Reason = [Started], Message = [Started container filler-pod-161d5949-90e1-402a-9d68-5c29a8d17b04]
STEP: Considering event: Type = [Normal], Name = [filler-pod-23dc7ec5-5a72-442b-a106-ae75abc87cce.1610e3c1693918e4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7959/filler-pod-23dc7ec5-5a72-442b-a106-ae75abc87cce to latest-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-23dc7ec5-5a72-442b-a106-ae75abc87cce.1610e3c1b6709cd4], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-23dc7ec5-5a72-442b-a106-ae75abc87cce.1610e3c208b36877], Reason = [Created], Message = [Created container filler-pod-23dc7ec5-5a72-442b-a106-ae75abc87cce]
STEP: Considering event: Type = [Normal], Name = [filler-pod-23dc7ec5-5a72-442b-a106-ae75abc87cce.1610e3c22886219e], Reason = [Started], Message = [Started container filler-pod-23dc7ec5-5a72-442b-a106-ae75abc87cce]
STEP: Considering event: Type = [Warning], Name = [additional-pod.1610e3c2607e9786], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: Considering event: Type = [Warning], Name = [additional-pod.1610e3c262789e55], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node latest-worker2
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node latest-worker
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 00:31:33.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7959" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:5.570 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":288,"completed":153,"skipped":2589,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 00:31:33.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
May 21 00:31:33.702: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 00:31:42.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8382" for this suite.
• [SLOW TEST:8.722 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":288,"completed":154,"skipped":2605,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 00:31:42.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 00:31:42.385: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-6b30cfe9-76c7-42d9-98b2-cde4cee7db22" in namespace "security-context-test-886" to be "Succeeded or Failed"
May 21 00:31:42.400: INFO: Pod "alpine-nnp-false-6b30cfe9-76c7-42d9-98b2-cde4cee7db22": Phase="Pending", Reason="", readiness=false. Elapsed: 15.45284ms
May 21 00:31:44.503: INFO: Pod "alpine-nnp-false-6b30cfe9-76c7-42d9-98b2-cde4cee7db22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118529122s
May 21 00:31:46.507: INFO: Pod "alpine-nnp-false-6b30cfe9-76c7-42d9-98b2-cde4cee7db22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.122548733s
May 21 00:31:46.508: INFO: Pod "alpine-nnp-false-6b30cfe9-76c7-42d9-98b2-cde4cee7db22" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 00:31:46.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-886" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":155,"skipped":2621,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 00:31:46.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
May 21 00:31:47.398: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
May 21 00:31:49.457: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725617907, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725617907, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725617907, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725617907, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 21 00:31:52.494: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 00:31:52.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 00:31:53.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-7673" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
• [SLOW TEST:7.345 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":288,"completed":156,"skipped":2624,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 00:31:53.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 21 00:31:55.103: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 21 00:31:57.114: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725617915, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725617915, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725617915, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725617915, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 21 00:32:00.170: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 00:32:00.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-721" for this suite.
STEP: Destroying namespace "webhook-721-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.448 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate configmap [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":288,"completed":157,"skipped":2664,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 00:32:00.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 21 00:32:00.452: INFO: Waiting up to 5m0s for pod "downwardapi-volume-138a9b53-3de1-4f24-96a2-9f2297f015c1" in namespace "projected-949" to be "Succeeded or Failed"
May 21 00:32:00.480: INFO: Pod "downwardapi-volume-138a9b53-3de1-4f24-96a2-9f2297f015c1": Phase="Pending", Reason="", readiness=false. Elapsed: 27.770767ms
May 21 00:32:02.484: INFO: Pod "downwardapi-volume-138a9b53-3de1-4f24-96a2-9f2297f015c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032191424s
May 21 00:32:04.488: INFO: Pod "downwardapi-volume-138a9b53-3de1-4f24-96a2-9f2297f015c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036287479s
STEP: Saw pod success
May 21 00:32:04.488: INFO: Pod "downwardapi-volume-138a9b53-3de1-4f24-96a2-9f2297f015c1" satisfied condition "Succeeded or Failed"
May 21 00:32:04.499: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-138a9b53-3de1-4f24-96a2-9f2297f015c1 container client-container: 
STEP: delete the pod
May 21 00:32:04.661: INFO: Waiting for pod downwardapi-volume-138a9b53-3de1-4f24-96a2-9f2297f015c1 to disappear
May 21 00:32:04.695: INFO: Pod downwardapi-volume-138a9b53-3de1-4f24-96a2-9f2297f015c1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 00:32:04.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-949" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":158,"skipped":2671,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 00:32:04.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-5062
STEP: creating service affinity-clusterip-transition in namespace services-5062
STEP: creating replication controller affinity-clusterip-transition in namespace services-5062
I0521 00:32:04.887238 8 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-5062, replica count: 3
I0521 00:32:07.937651 8 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0521 00:32:10.937903 8 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 21 00:32:10.945: INFO: Creating new exec pod May 21 00:32:15.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5062 execpod-affinityjzdbv -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' May 21 00:32:19.052: INFO: stderr: "I0521 00:32:18.926250 3125 log.go:172] (0xc000d720b0) (0xc0006e8f00) Create stream\nI0521 00:32:18.926288 3125 log.go:172] (0xc000d720b0) (0xc0006e8f00) Stream added, broadcasting: 1\nI0521 00:32:18.928304 3125 log.go:172] (0xc000d720b0) Reply frame received for 1\nI0521 00:32:18.928372 3125 log.go:172] (0xc000d720b0) (0xc0006e9ea0) Create stream\nI0521 00:32:18.928392 3125 log.go:172] (0xc000d720b0) (0xc0006e9ea0) Stream added, broadcasting: 3\nI0521 00:32:18.929752 3125 log.go:172] (0xc000d720b0) Reply frame received for 3\nI0521 00:32:18.929811 3125 log.go:172] (0xc000d720b0) (0xc0006d4780) Create stream\nI0521 00:32:18.929832 3125 log.go:172] (0xc000d720b0) (0xc0006d4780) Stream added, broadcasting: 5\nI0521 00:32:18.931165 3125 log.go:172] (0xc000d720b0) Reply frame received for 5\nI0521 00:32:19.024419 3125 log.go:172] (0xc000d720b0) Data frame received for 5\nI0521 00:32:19.024448 3125 log.go:172] (0xc0006d4780) (5) Data frame handling\nI0521 00:32:19.024468 3125 log.go:172] (0xc0006d4780) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nI0521 00:32:19.042788 3125 log.go:172] (0xc000d720b0) Data frame received for 5\nI0521 00:32:19.042813 3125 log.go:172] (0xc0006d4780) (5) Data frame handling\nI0521 00:32:19.042835 3125 log.go:172] (0xc0006d4780) (5) Data frame sent\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0521 00:32:19.043206 3125 log.go:172] (0xc000d720b0) Data frame received for 5\nI0521 00:32:19.043230 3125 log.go:172] (0xc0006d4780) (5) Data frame handling\nI0521 00:32:19.043777 3125 log.go:172] (0xc000d720b0) Data frame received for 3\nI0521 00:32:19.043793 3125 
log.go:172] (0xc0006e9ea0) (3) Data frame handling\nI0521 00:32:19.046070 3125 log.go:172] (0xc000d720b0) Data frame received for 1\nI0521 00:32:19.046109 3125 log.go:172] (0xc0006e8f00) (1) Data frame handling\nI0521 00:32:19.046147 3125 log.go:172] (0xc0006e8f00) (1) Data frame sent\nI0521 00:32:19.046184 3125 log.go:172] (0xc000d720b0) (0xc0006e8f00) Stream removed, broadcasting: 1\nI0521 00:32:19.046224 3125 log.go:172] (0xc000d720b0) Go away received\nI0521 00:32:19.046879 3125 log.go:172] (0xc000d720b0) (0xc0006e8f00) Stream removed, broadcasting: 1\nI0521 00:32:19.046903 3125 log.go:172] (0xc000d720b0) (0xc0006e9ea0) Stream removed, broadcasting: 3\nI0521 00:32:19.046915 3125 log.go:172] (0xc000d720b0) (0xc0006d4780) Stream removed, broadcasting: 5\n" May 21 00:32:19.052: INFO: stdout: "" May 21 00:32:19.053: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5062 execpod-affinityjzdbv -- /bin/sh -x -c nc -zv -t -w 2 10.97.97.154 80' May 21 00:32:19.254: INFO: stderr: "I0521 00:32:19.190687 3157 log.go:172] (0xc00098d810) (0xc000a746e0) Create stream\nI0521 00:32:19.190776 3157 log.go:172] (0xc00098d810) (0xc000a746e0) Stream added, broadcasting: 1\nI0521 00:32:19.195462 3157 log.go:172] (0xc00098d810) Reply frame received for 1\nI0521 00:32:19.195503 3157 log.go:172] (0xc00098d810) (0xc0005d8500) Create stream\nI0521 00:32:19.195514 3157 log.go:172] (0xc00098d810) (0xc0005d8500) Stream added, broadcasting: 3\nI0521 00:32:19.196408 3157 log.go:172] (0xc00098d810) Reply frame received for 3\nI0521 00:32:19.196437 3157 log.go:172] (0xc00098d810) (0xc0005d8a00) Create stream\nI0521 00:32:19.196443 3157 log.go:172] (0xc00098d810) (0xc0005d8a00) Stream added, broadcasting: 5\nI0521 00:32:19.197530 3157 log.go:172] (0xc00098d810) Reply frame received for 5\nI0521 00:32:19.248625 3157 log.go:172] (0xc00098d810) Data frame received for 5\nI0521 00:32:19.248672 3157 log.go:172] 
(0xc0005d8a00) (5) Data frame handling\nI0521 00:32:19.248687 3157 log.go:172] (0xc0005d8a00) (5) Data frame sent\nI0521 00:32:19.248697 3157 log.go:172] (0xc00098d810) Data frame received for 5\nI0521 00:32:19.248705 3157 log.go:172] (0xc0005d8a00) (5) Data frame handling\n+ nc -zv -t -w 2 10.97.97.154 80\nConnection to 10.97.97.154 80 port [tcp/http] succeeded!\nI0521 00:32:19.248732 3157 log.go:172] (0xc00098d810) Data frame received for 3\nI0521 00:32:19.248746 3157 log.go:172] (0xc0005d8500) (3) Data frame handling\nI0521 00:32:19.250382 3157 log.go:172] (0xc00098d810) Data frame received for 1\nI0521 00:32:19.250407 3157 log.go:172] (0xc000a746e0) (1) Data frame handling\nI0521 00:32:19.250421 3157 log.go:172] (0xc000a746e0) (1) Data frame sent\nI0521 00:32:19.250436 3157 log.go:172] (0xc00098d810) (0xc000a746e0) Stream removed, broadcasting: 1\nI0521 00:32:19.250605 3157 log.go:172] (0xc00098d810) Go away received\nI0521 00:32:19.250749 3157 log.go:172] (0xc00098d810) (0xc000a746e0) Stream removed, broadcasting: 1\nI0521 00:32:19.250770 3157 log.go:172] (0xc00098d810) (0xc0005d8500) Stream removed, broadcasting: 3\nI0521 00:32:19.250783 3157 log.go:172] (0xc00098d810) (0xc0005d8a00) Stream removed, broadcasting: 5\n" May 21 00:32:19.254: INFO: stdout: "" May 21 00:32:19.262: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5062 execpod-affinityjzdbv -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.97.97.154:80/ ; done' May 21 00:32:19.634: INFO: stderr: "I0521 00:32:19.398599 3175 log.go:172] (0xc000c04c60) (0xc0006e9c20) Create stream\nI0521 00:32:19.398653 3175 log.go:172] (0xc000c04c60) (0xc0006e9c20) Stream added, broadcasting: 1\nI0521 00:32:19.401630 3175 log.go:172] (0xc000c04c60) Reply frame received for 1\nI0521 00:32:19.401679 3175 log.go:172] (0xc000c04c60) (0xc00068efa0) Create stream\nI0521 00:32:19.401701 3175 log.go:172] 
(0xc000c04c60) (0xc00068efa0) Stream added, broadcasting: 3\nI0521 00:32:19.402869 3175 log.go:172] (0xc000c04c60) Reply frame received for 3\nI0521 00:32:19.402909 3175 log.go:172] (0xc000c04c60) (0xc00049da40) Create stream\nI0521 00:32:19.402924 3175 log.go:172] (0xc000c04c60) (0xc00049da40) Stream added, broadcasting: 5\nI0521 00:32:19.404060 3175 log.go:172] (0xc000c04c60) Reply frame received for 5\nI0521 00:32:19.466105 3175 log.go:172] (0xc000c04c60) Data frame received for 3\nI0521 00:32:19.466143 3175 log.go:172] (0xc00068efa0) (3) Data frame handling\nI0521 00:32:19.466160 3175 log.go:172] (0xc00068efa0) (3) Data frame sent\nI0521 00:32:19.466185 3175 log.go:172] (0xc000c04c60) Data frame received for 5\nI0521 00:32:19.466199 3175 log.go:172] (0xc00049da40) (5) Data frame handling\nI0521 00:32:19.466208 3175 log.go:172] (0xc00049da40) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.97.154:80/\nI0521 00:32:19.535259 3175 log.go:172] (0xc000c04c60) Data frame received for 3\nI0521 00:32:19.535291 3175 log.go:172] (0xc00068efa0) (3) Data frame handling\nI0521 00:32:19.535311 3175 log.go:172] (0xc00068efa0) (3) Data frame sent\nI0521 00:32:19.536488 3175 log.go:172] (0xc000c04c60) Data frame received for 3\nI0521 00:32:19.536576 3175 log.go:172] (0xc00068efa0) (3) Data frame handling\nI0521 00:32:19.536606 3175 log.go:172] (0xc00068efa0) (3) Data frame sent\nI0521 00:32:19.536645 3175 log.go:172] (0xc000c04c60) Data frame received for 5\nI0521 00:32:19.536672 3175 log.go:172] (0xc00049da40) (5) Data frame handling\nI0521 00:32:19.536692 3175 log.go:172] (0xc00049da40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.97.154:80/\nI0521 00:32:19.547982 3175 log.go:172] (0xc000c04c60) Data frame received for 3\nI0521 00:32:19.548011 3175 log.go:172] (0xc00068efa0) (3) Data frame handling\nI0521 00:32:19.548031 3175 log.go:172] (0xc00068efa0) (3) Data frame sent\nI0521 00:32:19.548409 3175 
log.go:172] (0xc000c04c60) Data frame received for 3\nI0521 00:32:19.548433 3175 log.go:172] (0xc00068efa0) (3) Data frame handling\nI0521 00:32:19.548440 3175 log.go:172] (0xc00068efa0) (3) Data frame sent\nI0521 00:32:19.548450 3175 log.go:172] (0xc000c04c60) Data frame received for 5\nI0521 00:32:19.548455 3175 log.go:172] (0xc00049da40) (5) Data frame handling\nI0521 00:32:19.548459 3175 log.go:172] (0xc00049da40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.97.154:80/\nI0521 00:32:19.554851 3175 log.go:172] (0xc000c04c60) Data frame received for 3\nI0521 00:32:19.554868 3175 log.go:172] (0xc00068efa0) (3) Data frame handling\nI0521 00:32:19.554885 3175 log.go:172] (0xc00068efa0) (3) Data frame sent\nI0521 00:32:19.555406 3175 log.go:172] (0xc000c04c60) Data frame received for 3\nI0521 00:32:19.555438 3175 log.go:172] (0xc00068efa0) (3) Data frame handling\nI0521 00:32:19.555453 3175 log.go:172] (0xc00068efa0) (3) Data frame sent\nI0521 00:32:19.555477 3175 log.go:172] (0xc000c04c60) Data frame received for 5\nI0521 00:32:19.555491 3175 log.go:172] (0xc00049da40) (5) Data frame handling\nI0521 00:32:19.555509 3175 log.go:172] (0xc00049da40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.97.154:80/\nI0521 00:32:19.561080 3175 log.go:172] (0xc000c04c60) Data frame received for 3\nI0521 00:32:19.561095 3175 log.go:172] (0xc00068efa0) (3) Data frame handling\nI0521 00:32:19.561104 3175 log.go:172] (0xc00068efa0) (3) Data frame sent\nI0521 00:32:19.561783 3175 log.go:172] (0xc000c04c60) Data frame received for 3\nI0521 00:32:19.561816 3175 log.go:172] (0xc00068efa0) (3) Data frame handling\nI0521 00:32:19.561831 3175 log.go:172] (0xc00068efa0) (3) Data frame sent\nI0521 00:32:19.561850 3175 log.go:172] (0xc000c04c60) Data frame received for 5\nI0521 00:32:19.561859 3175 log.go:172] (0xc00049da40) (5) Data frame handling\nI0521 00:32:19.561872 3175 log.go:172] (0xc00049da40) (5) Data frame sent\nI0521 
00:32:19.561886 3175 log.go:172] (0xc000c04c60) Data frame received for 5\nI0521 00:32:19.561902 3175 log.go:172] (0xc00049da40) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.97.154:80/\nI0521 00:32:19.561933 3175 log.go:172] (0xc00049da40) (5) Data frame sent\nI0521 00:32:19.565926 3175 log.go:172] (0xc000c04c60) Data frame received for 3\nI0521 00:32:19.565939 3175 log.go:172] (0xc00068efa0) (3) Data frame handling\nI0521 00:32:19.565951 3175 log.go:172] (0xc00068efa0) (3) Data frame sent\nI0521 00:32:19.566794 3175 log.go:172] (0xc000c04c60) Data frame received for 3\nI0521 00:32:19.566822 3175 log.go:172] (0xc000c04c60) Data frame received for 5\nI0521 00:32:19.566855 3175 log.go:172] (0xc00049da40) (5) Data frame handling\nI0521 00:32:19.566872 3175 log.go:172] (0xc00049da40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.97.154:80/\nI0521 00:32:19.566900 3175 log.go:172] (0xc00068efa0) (3) Data frame handling\nI0521 00:32:19.566916 3175 log.go:172] (0xc00068efa0) (3) Data frame sent\nI0521 00:32:19.570825 3175 log.go:172] (0xc000c04c60) Data frame received for 3\nI0521 00:32:19.570838 3175 log.go:172] (0xc00068efa0) (3) Data frame handling\nI0521 00:32:19.570846 3175 log.go:172] (0xc00068efa0) (3) Data frame sent\nI0521 00:32:19.571346 3175 log.go:172] (0xc000c04c60) Data frame received for 3\nI0521 00:32:19.571389 3175 log.go:172] (0xc00068efa0) (3) Data frame handling\nI0521 00:32:19.571414 3175 log.go:172] (0xc00068efa0) (3) Data frame sent\nI0521 00:32:19.571452 3175 log.go:172] (0xc000c04c60) Data frame received for 5\nI0521 00:32:19.571475 3175 log.go:172] (0xc00049da40) (5) Data frame handling\nI0521 00:32:19.571491 3175 log.go:172] (0xc00049da40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.97.154:80/\nI0521 00:32:19.575694 3175 log.go:172] (0xc000c04c60) Data frame received for 3\nI0521 00:32:19.575724 3175 log.go:172] (0xc00068efa0) (3) Data frame 
handling\nI0521 00:32:19.575763 3175 log.go:172] (0xc00068efa0) (3) Data frame sent\nI0521 00:32:19.576244 3175 log.go:172] (0xc000c04c60) Data frame received for 5\nI0521 00:32:19.576277 3175 log.go:172] (0xc000c04c60) Data frame received for 3\nI0521 00:32:19.576313 3175 log.go:172] (0xc00068efa0) (3) Data frame handling\nI0521 00:32:19.576330 3175 log.go:172] (0xc00068efa0) (3) Data frame sent\nI0521 00:32:19.576353 3175 log.go:172] (0xc00049da40) (5) Data frame handling\nI0521 00:32:19.576368 3175 log.go:172] (0xc00049da40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.97.154:80/\nI0521 00:32:19.580628 3175 log.go:172] (0xc000c04c60) Data frame received for 3\nI0521 00:32:19.580655 3175 log.go:172] (0xc00068efa0) (3) Data frame handling\nI0521 00:32:19.580676 3175 log.go:172] (0xc00068efa0) (3) Data frame sent\nI0521 00:32:19.581366 3175 log.go:172] (0xc000c04c60) Data frame received for 5\nI0521 00:32:19.581393 3175 log.go:172] (0xc00049da40) (5) Data frame handling\nI0521 00:32:19.581426 3175 log.go:172] (0xc00049da40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.97.154:80/\nI0521 00:32:19.581449 3175 log.go:172] (0xc000c04c60) Data frame received for 3\nI0521 00:32:19.581459 3175 log.go:172] (0xc00068efa0) (3) Data frame handling\nI0521 00:32:19.581469 3175 log.go:172] (0xc00068efa0) (3) Data frame sent\nI0521 00:32:19.585793 3175 log.go:172] (0xc000c04c60) Data frame received for 3\nI0521 00:32:19.585815 3175 log.go:172] (0xc00068efa0) (3) Data frame handling\nI0521 00:32:19.585836 3175 log.go:172] (0xc00068efa0) (3) Data frame sent\nI0521 00:32:19.586367 3175 log.go:172] (0xc000c04c60) Data frame received for 5\nI0521 00:32:19.586411 3175 log.go:172] (0xc00049da40) (5) Data frame handling\nI0521 00:32:19.586442 3175 log.go:172] (0xc00049da40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.97.154:80/\nI0521 00:32:19.586472 3175 log.go:172] (0xc000c04c60) Data frame 
received for 3\nI0521 00:32:19.586482 3175 log.go:172] (0xc00068efa0) (3) Data frame handling\nI0521 00:32:19.586506 3175 log.go:172] (0xc00068efa0) (3) Data frame sent\nI0521 00:32:19.591174 3175 log.go:172] (0xc000c04c60) Data frame received for 3\nI0521 00:32:19.591200 3175 log.go:172] (0xc00068efa0) (3) Data frame handling\nI0521 00:32:19.591218 3175 log.go:172] (0xc00068efa0) (3) Data frame sent\nI0521 00:32:19.592052 3175 log.go:172] (0xc000c04c60) Data frame received for 5\nI0521 00:32:19.592073 3175 log.go:172] (0xc00049da40) (5) Data frame handling\nI0521 00:32:19.592090 3175 log.go:172] (0xc00049da40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.97.154:80/\nI0521 00:32:19.592135 3175 log.go:172] (0xc000c04c60) Data frame received for 3\nI0521 00:32:19.592158 3175 log.go:172] (0xc00068efa0) (3) Data frame handling\nI0521 00:32:19.592176 3175 log.go:172] (0xc00068efa0) (3) Data frame sent\nI0521 00:32:19.596151 3175 log.go:172] (0xc000c04c60) Data frame received for 3\nI0521 00:32:19.596174 3175 log.go:172] (0xc00068efa0) (3) Data frame handling\nI0521 00:32:19.596205 3175 log.go:172] (0xc00068efa0) (3) Data frame sent\nI0521 00:32:19.596631 3175 log.go:172] (0xc000c04c60) Data frame received for 3\nI0521 00:32:19.596668 3175 log.go:172] (0xc00068efa0) (3) Data frame handling\nI0521 00:32:19.596679 3175 log.go:172] (0xc00068efa0) (3) Data frame sent\nI0521 00:32:19.596693 3175 log.go:172] (0xc000c04c60) Data frame received for 5\nI0521 00:32:19.596701 3175 log.go:172] (0xc00049da40) (5) Data frame handling\nI0521 00:32:19.596716 3175 log.go:172] (0xc00049da40) (5) Data frame sent\nI0521 00:32:19.596722 3175 log.go:172] (0xc000c04c60) Data frame received for 5\nI0521 00:32:19.596726 3175 log.go:172] (0xc00049da40) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.97.154:80/\nI0521 00:32:19.596746 3175 log.go:172] (0xc00049da40) (5) Data frame sent\nI0521 00:32:19.601534 3175 log.go:172] 
(0xc000c04c60) Data frame received for 3\nI0521 00:32:19.601562 3175 log.go:172] (0xc00068efa0) (3) Data frame handling\nI0521 00:32:19.601590 3175 log.go:172] (0xc00068efa0) (3) Data frame sent\nI0521 00:32:19.602228 3175 log.go:172] (0xc000c04c60) Data frame received for 5\nI0521 00:32:19.602241 3175 log.go:172] (0xc00049da40) (5) Data frame handling\nI0521 00:32:19.602248 3175 log.go:172] (0xc00049da40) (5) Data frame sent\nI0521 00:32:19.602252 3175 log.go:172] (0xc000c04c60) Data frame received for 5\nI0521 00:32:19.602256 3175 log.go:172] (0xc00049da40) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.97.154:80/\nI0521 00:32:19.602269 3175 log.go:172] (0xc00049da40) (5) Data frame sent\nI0521 00:32:19.602286 3175 log.go:172] (0xc000c04c60) Data frame received for 3\nI0521 00:32:19.602314 3175 log.go:172] (0xc00068efa0) (3) Data frame handling\nI0521 00:32:19.602342 3175 log.go:172] (0xc00068efa0) (3) Data frame sent\nI0521 00:32:19.607787 3175 log.go:172] (0xc000c04c60) Data frame received for 3\nI0521 00:32:19.607808 3175 log.go:172] (0xc00068efa0) (3) Data frame handling\nI0521 00:32:19.607819 3175 log.go:172] (0xc00068efa0) (3) Data frame sent\nI0521 00:32:19.608261 3175 log.go:172] (0xc000c04c60) Data frame received for 3\nI0521 00:32:19.608275 3175 log.go:172] (0xc00068efa0) (3) Data frame handling\nI0521 00:32:19.608287 3175 log.go:172] (0xc00068efa0) (3) Data frame sent\nI0521 00:32:19.608308 3175 log.go:172] (0xc000c04c60) Data frame received for 5\nI0521 00:32:19.608325 3175 log.go:172] (0xc00049da40) (5) Data frame handling\nI0521 00:32:19.608337 3175 log.go:172] (0xc00049da40) (5) Data frame sent\nI0521 00:32:19.608348 3175 log.go:172] (0xc000c04c60) Data frame received for 5\nI0521 00:32:19.608358 3175 log.go:172] (0xc00049da40) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.97.154:80/\nI0521 00:32:19.608384 3175 log.go:172] (0xc00049da40) (5) Data frame sent\nI0521 00:32:19.615176 3175 
log.go:172] (0xc000c04c60) Data frame received for 3\nI0521 00:32:19.615199 3175 log.go:172] (0xc00068efa0) (3) Data frame handling\nI0521 00:32:19.615226 3175 log.go:172] (0xc00068efa0) (3) Data frame sent\nI0521 00:32:19.615952 3175 log.go:172] (0xc000c04c60) Data frame received for 3\nI0521 00:32:19.615963 3175 log.go:172] (0xc00068efa0) (3) Data frame handling\nI0521 00:32:19.615970 3175 log.go:172] (0xc00068efa0) (3) Data frame sent\nI0521 00:32:19.615981 3175 log.go:172] (0xc000c04c60) Data frame received for 5\nI0521 00:32:19.615988 3175 log.go:172] (0xc00049da40) (5) Data frame handling\nI0521 00:32:19.615994 3175 log.go:172] (0xc00049da40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.97.154:80/\nI0521 00:32:19.621360 3175 log.go:172] (0xc000c04c60) Data frame received for 3\nI0521 00:32:19.621386 3175 log.go:172] (0xc00068efa0) (3) Data frame handling\nI0521 00:32:19.621399 3175 log.go:172] (0xc00068efa0) (3) Data frame sent\nI0521 00:32:19.621991 3175 log.go:172] (0xc000c04c60) Data frame received for 3\nI0521 00:32:19.622002 3175 log.go:172] (0xc00068efa0) (3) Data frame handling\nI0521 00:32:19.622009 3175 log.go:172] (0xc00068efa0) (3) Data frame sent\nI0521 00:32:19.622041 3175 log.go:172] (0xc000c04c60) Data frame received for 5\nI0521 00:32:19.622065 3175 log.go:172] (0xc00049da40) (5) Data frame handling\nI0521 00:32:19.622096 3175 log.go:172] (0xc00049da40) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.97.154:80/\nI0521 00:32:19.625827 3175 log.go:172] (0xc000c04c60) Data frame received for 3\nI0521 00:32:19.625839 3175 log.go:172] (0xc00068efa0) (3) Data frame handling\nI0521 00:32:19.625847 3175 log.go:172] (0xc00068efa0) (3) Data frame sent\nI0521 00:32:19.626477 3175 log.go:172] (0xc000c04c60) Data frame received for 3\nI0521 00:32:19.626503 3175 log.go:172] (0xc00068efa0) (3) Data frame handling\nI0521 00:32:19.626518 3175 log.go:172] (0xc000c04c60) Data frame received for 5\nI0521 
00:32:19.626530 3175 log.go:172] (0xc00049da40) (5) Data frame handling\nI0521 00:32:19.628138 3175 log.go:172] (0xc000c04c60) Data frame received for 1\nI0521 00:32:19.628154 3175 log.go:172] (0xc0006e9c20) (1) Data frame handling\nI0521 00:32:19.628165 3175 log.go:172] (0xc0006e9c20) (1) Data frame sent\nI0521 00:32:19.628177 3175 log.go:172] (0xc000c04c60) (0xc0006e9c20) Stream removed, broadcasting: 1\nI0521 00:32:19.628200 3175 log.go:172] (0xc000c04c60) Go away received\nI0521 00:32:19.628586 3175 log.go:172] (0xc000c04c60) (0xc0006e9c20) Stream removed, broadcasting: 1\nI0521 00:32:19.628600 3175 log.go:172] (0xc000c04c60) (0xc00068efa0) Stream removed, broadcasting: 3\nI0521 00:32:19.628607 3175 log.go:172] (0xc000c04c60) (0xc00049da40) Stream removed, broadcasting: 5\n"
May 21 00:32:19.634: INFO: stdout: "\naffinity-clusterip-transition-ftdp7\naffinity-clusterip-transition-s5k2m\naffinity-clusterip-transition-s5k2m\naffinity-clusterip-transition-vrxrr\naffinity-clusterip-transition-vrxrr\naffinity-clusterip-transition-s5k2m\naffinity-clusterip-transition-vrxrr\naffinity-clusterip-transition-vrxrr\naffinity-clusterip-transition-vrxrr\naffinity-clusterip-transition-vrxrr\naffinity-clusterip-transition-s5k2m\naffinity-clusterip-transition-ftdp7\naffinity-clusterip-transition-s5k2m\naffinity-clusterip-transition-vrxrr\naffinity-clusterip-transition-vrxrr\naffinity-clusterip-transition-s5k2m"
May 21 00:32:19.634: INFO: Received response from host: 
May 21 00:32:19.634: INFO: Received response from host: affinity-clusterip-transition-ftdp7
May 21 00:32:19.634: INFO: Received response from host: affinity-clusterip-transition-s5k2m
May 21 00:32:19.634: INFO: Received response from host: affinity-clusterip-transition-s5k2m
May 21 00:32:19.634: INFO: Received response from host: affinity-clusterip-transition-vrxrr
May 21 00:32:19.634: INFO: Received response from host: affinity-clusterip-transition-vrxrr
May 21 00:32:19.634: INFO: Received response from host: affinity-clusterip-transition-s5k2m
May 21 00:32:19.634: INFO: Received response from host: affinity-clusterip-transition-vrxrr
May 21 00:32:19.634: INFO: Received response from host: affinity-clusterip-transition-vrxrr
May 21 00:32:19.634: INFO: Received response from host: affinity-clusterip-transition-vrxrr
May 21 00:32:19.634: INFO: Received response from host: affinity-clusterip-transition-vrxrr
May 21 00:32:19.634: INFO: Received response from host: affinity-clusterip-transition-s5k2m
May 21 00:32:19.634: INFO: Received response from host: affinity-clusterip-transition-ftdp7
May 21 00:32:19.634: INFO: Received response from host: affinity-clusterip-transition-s5k2m
May 21 00:32:19.634: INFO: Received response from host: affinity-clusterip-transition-vrxrr
May 21 00:32:19.634: INFO: Received response from host: affinity-clusterip-transition-vrxrr
May 21 00:32:19.634: INFO: Received response from host: affinity-clusterip-transition-s5k2m
May 21 00:32:19.646: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5062 execpod-affinityjzdbv -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.97.97.154:80/ ; done'
May 21 00:32:19.977: INFO: stderr: "I0521 00:32:19.805697 3198 log.go:172] (0xc000b9d290) (0xc000adc500) Create stream\nI0521 00:32:19.805762 3198 log.go:172] (0xc000b9d290) (0xc000adc500) Stream added, broadcasting: 1\nI0521 00:32:19.810735 3198 log.go:172] (0xc000b9d290) Reply frame received for 1\nI0521 00:32:19.810776 3198 log.go:172] (0xc000b9d290) (0xc000adc000) Create stream\nI0521 00:32:19.810788 3198 log.go:172] (0xc000b9d290) (0xc000adc000) Stream added, broadcasting: 3\nI0521 00:32:19.811703 3198 log.go:172] (0xc000b9d290) Reply frame received for 3\nI0521 00:32:19.811749 3198 log.go:172] (0xc000b9d290) (0xc000a84000) Create stream\nI0521 00:32:19.811764 3198 log.go:172] (0xc000b9d290) (0xc000a84000) Stream added, broadcasting: 
5\nI0521 00:32:19.812692 3198 log.go:172] (0xc000b9d290) Reply frame received for 5\nI0521 00:32:19.888630 3198 log.go:172] (0xc000b9d290) Data frame received for 3\nI0521 00:32:19.888664 3198 log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.888674 3198 log.go:172] (0xc000adc000) (3) Data frame sent\nI0521 00:32:19.888694 3198 log.go:172] (0xc000b9d290) Data frame received for 5\nI0521 00:32:19.888702 3198 log.go:172] (0xc000a84000) (5) Data frame handling\nI0521 00:32:19.888710 3198 log.go:172] (0xc000a84000) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.97.154:80/\nI0521 00:32:19.893628 3198 log.go:172] (0xc000b9d290) Data frame received for 3\nI0521 00:32:19.893667 3198 log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.893689 3198 log.go:172] (0xc000adc000) (3) Data frame sent\nI0521 00:32:19.894120 3198 log.go:172] (0xc000b9d290) Data frame received for 3\nI0521 00:32:19.894150 3198 log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.894162 3198 log.go:172] (0xc000adc000) (3) Data frame sent\nI0521 00:32:19.894175 3198 log.go:172] (0xc000b9d290) Data frame received for 5\nI0521 00:32:19.894184 3198 log.go:172] (0xc000a84000) (5) Data frame handling\nI0521 00:32:19.894195 3198 log.go:172] (0xc000a84000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.97.154:80/\nI0521 00:32:19.898100 3198 log.go:172] (0xc000b9d290) Data frame received for 3\nI0521 00:32:19.898117 3198 log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.898135 3198 log.go:172] (0xc000adc000) (3) Data frame sent\nI0521 00:32:19.898669 3198 log.go:172] (0xc000b9d290) Data frame received for 5\nI0521 00:32:19.898680 3198 log.go:172] (0xc000a84000) (5) Data frame handling\nI0521 00:32:19.898686 3198 log.go:172] (0xc000a84000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.97.154:80/\nI0521 00:32:19.898697 3198 log.go:172] (0xc000b9d290) Data 
frame received for 3\nI0521 00:32:19.898701 3198 log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.898706 3198 log.go:172] (0xc000adc000) (3) Data frame sent\nI0521 00:32:19.902203 3198 log.go:172] (0xc000b9d290) Data frame received for 3\nI0521 00:32:19.902225 3198 log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.902243 3198 log.go:172] (0xc000adc000) (3) Data frame sent\nI0521 00:32:19.902669 3198 log.go:172] (0xc000b9d290) Data frame received for 3\nI0521 00:32:19.902702 3198 log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.902714 3198 log.go:172] (0xc000adc000) (3) Data frame sent\nI0521 00:32:19.902727 3198 log.go:172] (0xc000b9d290) Data frame received for 5\nI0521 00:32:19.902734 3198 log.go:172] (0xc000a84000) (5) Data frame handling\nI0521 00:32:19.902746 3198 log.go:172] (0xc000a84000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.97.154:80/\nI0521 00:32:19.906102 3198 log.go:172] (0xc000b9d290) Data frame received for 3\nI0521 00:32:19.906123 3198 log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.906142 3198 log.go:172] (0xc000adc000) (3) Data frame sent\nI0521 00:32:19.906572 3198 log.go:172] (0xc000b9d290) Data frame received for 5\nI0521 00:32:19.906591 3198 log.go:172] (0xc000a84000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.97.154:80/\nI0521 00:32:19.906617 3198 log.go:172] (0xc000b9d290) Data frame received for 3\nI0521 00:32:19.906643 3198 log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.906662 3198 log.go:172] (0xc000adc000) (3) Data frame sent\nI0521 00:32:19.906682 3198 log.go:172] (0xc000a84000) (5) Data frame sent\nI0521 00:32:19.910114 3198 log.go:172] (0xc000b9d290) Data frame received for 3\nI0521 00:32:19.910131 3198 log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.910150 3198 log.go:172] (0xc000adc000) (3) Data frame sent\nI0521 00:32:19.910755 3198 log.go:172] 
(0xc000b9d290) Data frame received for 3\nI0521 00:32:19.910773 3198 log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.910783 3198 log.go:172] (0xc000adc000) (3) Data frame sent\nI0521 00:32:19.910843 3198 log.go:172] (0xc000b9d290) Data frame received for 5\nI0521 00:32:19.910866 3198 log.go:172] (0xc000a84000) (5) Data frame handling\nI0521 00:32:19.910883 3198 log.go:172] (0xc000a84000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.97.154:80/\nI0521 00:32:19.916634 3198 log.go:172] (0xc000b9d290) Data frame received for 3\nI0521 00:32:19.916657 3198 log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.916693 3198 log.go:172] (0xc000adc000) (3) Data frame sent\nI0521 00:32:19.917414 3198 log.go:172] (0xc000b9d290) Data frame received for 5\nI0521 00:32:19.917530 3198 log.go:172] (0xc000a84000) (5) Data frame handling\nI0521 00:32:19.917559 3198 log.go:172] (0xc000a84000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.97.154:80/\nI0521 00:32:19.917579 3198 log.go:172] (0xc000b9d290) Data frame received for 3\nI0521 00:32:19.917609 3198 log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.917636 3198 log.go:172] (0xc000adc000) (3) Data frame sent\nI0521 00:32:19.922155 3198 log.go:172] (0xc000b9d290) Data frame received for 3\nI0521 00:32:19.922175 3198 log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.922189 3198 log.go:172] (0xc000adc000) (3) Data frame sent\nI0521 00:32:19.923148 3198 log.go:172] (0xc000b9d290) Data frame received for 5\nI0521 00:32:19.923178 3198 log.go:172] (0xc000b9d290) Data frame received for 3\nI0521 00:32:19.923208 3198 log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.923224 3198 log.go:172] (0xc000adc000) (3) Data frame sent\nI0521 00:32:19.923244 3198 log.go:172] (0xc000a84000) (5) Data frame handling\nI0521 00:32:19.923259 3198 log.go:172] (0xc000a84000) (5) Data frame sent\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://10.97.97.154:80/\nI0521 00:32:19.928780 3198 log.go:172] (0xc000b9d290) Data frame received for 3\nI0521 00:32:19.928799 3198 log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.928814 3198 log.go:172] (0xc000adc000) (3) Data frame sent\nI0521 00:32:19.929695 3198 log.go:172] (0xc000b9d290) Data frame received for 5\nI0521 00:32:19.929718 3198 log.go:172] (0xc000a84000) (5) Data frame handling\nI0521 00:32:19.929730 3198 log.go:172] (0xc000a84000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.97.154:80/\nI0521 00:32:19.929761 3198 log.go:172] (0xc000b9d290) Data frame received for 3\nI0521 00:32:19.929778 3198 log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.929794 3198 log.go:172] (0xc000adc000) (3) Data frame sent\nI0521 00:32:19.934682 3198 log.go:172] (0xc000b9d290) Data frame received for 3\nI0521 00:32:19.934701 3198 log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.934727 3198 log.go:172] (0xc000adc000) (3) Data frame sent\nI0521 00:32:19.935181 3198 log.go:172] (0xc000b9d290) Data frame received for 3\nI0521 00:32:19.935209 3198 log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.935221 3198 log.go:172] (0xc000adc000) (3) Data frame sent\nI0521 00:32:19.935238 3198 log.go:172] (0xc000b9d290) Data frame received for 5\nI0521 00:32:19.935255 3198 log.go:172] (0xc000a84000) (5) Data frame handling\nI0521 00:32:19.935265 3198 log.go:172] (0xc000a84000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.97.154:80/\nI0521 00:32:19.942428 3198 log.go:172] (0xc000b9d290) Data frame received for 5\nI0521 00:32:19.942454 3198 log.go:172] (0xc000a84000) (5) Data frame handling\nI0521 00:32:19.942472 3198 log.go:172] (0xc000a84000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.97.154:80/\nI0521 00:32:19.942506 3198 log.go:172] (0xc000b9d290) Data frame received for 3\nI0521 00:32:19.942534 3198 
log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.942551 3198 log.go:172] (0xc000adc000) (3) Data frame sent\nI0521 00:32:19.942570 3198 log.go:172] (0xc000b9d290) Data frame received for 3\nI0521 00:32:19.942589 3198 log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.942612 3198 log.go:172] (0xc000adc000) (3) Data frame sent\nI0521 00:32:19.946660 3198 log.go:172] (0xc000b9d290) Data frame received for 3\nI0521 00:32:19.946694 3198 log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.946728 3198 log.go:172] (0xc000adc000) (3) Data frame sent\nI0521 00:32:19.947277 3198 log.go:172] (0xc000b9d290) Data frame received for 3\nI0521 00:32:19.947304 3198 log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.947315 3198 log.go:172] (0xc000adc000) (3) Data frame sent\nI0521 00:32:19.947332 3198 log.go:172] (0xc000b9d290) Data frame received for 5\nI0521 00:32:19.947347 3198 log.go:172] (0xc000a84000) (5) Data frame handling\nI0521 00:32:19.947356 3198 log.go:172] (0xc000a84000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.97.154:80/\nI0521 00:32:19.951300 3198 log.go:172] (0xc000b9d290) Data frame received for 3\nI0521 00:32:19.951318 3198 log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.951333 3198 log.go:172] (0xc000adc000) (3) Data frame sent\nI0521 00:32:19.951663 3198 log.go:172] (0xc000b9d290) Data frame received for 5\nI0521 00:32:19.951694 3198 log.go:172] (0xc000a84000) (5) Data frame handling\nI0521 00:32:19.951708 3198 log.go:172] (0xc000a84000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.97.154:80/\nI0521 00:32:19.951734 3198 log.go:172] (0xc000b9d290) Data frame received for 3\nI0521 00:32:19.951743 3198 log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.951758 3198 log.go:172] (0xc000adc000) (3) Data frame sent\nI0521 00:32:19.956240 3198 log.go:172] (0xc000b9d290) Data frame received for 3\nI0521 
00:32:19.956277 3198 log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.956292 3198 log.go:172] (0xc000adc000) (3) Data frame sent\nI0521 00:32:19.956943 3198 log.go:172] (0xc000b9d290) Data frame received for 3\nI0521 00:32:19.956971 3198 log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.956996 3198 log.go:172] (0xc000adc000) (3) Data frame sent\nI0521 00:32:19.957018 3198 log.go:172] (0xc000b9d290) Data frame received for 5\nI0521 00:32:19.957029 3198 log.go:172] (0xc000a84000) (5) Data frame handling\nI0521 00:32:19.957045 3198 log.go:172] (0xc000a84000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.97.154:80/\nI0521 00:32:19.960392 3198 log.go:172] (0xc000b9d290) Data frame received for 3\nI0521 00:32:19.960407 3198 log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.960415 3198 log.go:172] (0xc000adc000) (3) Data frame sent\nI0521 00:32:19.960792 3198 log.go:172] (0xc000b9d290) Data frame received for 3\nI0521 00:32:19.960810 3198 log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.960818 3198 log.go:172] (0xc000adc000) (3) Data frame sent\nI0521 00:32:19.960828 3198 log.go:172] (0xc000b9d290) Data frame received for 5\nI0521 00:32:19.960834 3198 log.go:172] (0xc000a84000) (5) Data frame handling\nI0521 00:32:19.960840 3198 log.go:172] (0xc000a84000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.97.154:80/\nI0521 00:32:19.965385 3198 log.go:172] (0xc000b9d290) Data frame received for 3\nI0521 00:32:19.965400 3198 log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.965412 3198 log.go:172] (0xc000adc000) (3) Data frame sent\nI0521 00:32:19.965759 3198 log.go:172] (0xc000b9d290) Data frame received for 5\nI0521 00:32:19.965785 3198 log.go:172] (0xc000a84000) (5) Data frame handling\nI0521 00:32:19.965804 3198 log.go:172] (0xc000a84000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.97.97.154:80/\nI0521 
00:32:19.965862 3198 log.go:172] (0xc000b9d290) Data frame received for 3\nI0521 00:32:19.965874 3198 log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.965885 3198 log.go:172] (0xc000adc000) (3) Data frame sent\nI0521 00:32:19.969611 3198 log.go:172] (0xc000b9d290) Data frame received for 3\nI0521 00:32:19.969629 3198 log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.969646 3198 log.go:172] (0xc000adc000) (3) Data frame sent\nI0521 00:32:19.970224 3198 log.go:172] (0xc000b9d290) Data frame received for 5\nI0521 00:32:19.970250 3198 log.go:172] (0xc000a84000) (5) Data frame handling\nI0521 00:32:19.970344 3198 log.go:172] (0xc000b9d290) Data frame received for 3\nI0521 00:32:19.970364 3198 log.go:172] (0xc000adc000) (3) Data frame handling\nI0521 00:32:19.971878 3198 log.go:172] (0xc000b9d290) Data frame received for 1\nI0521 00:32:19.971917 3198 log.go:172] (0xc000adc500) (1) Data frame handling\nI0521 00:32:19.971958 3198 log.go:172] (0xc000adc500) (1) Data frame sent\nI0521 00:32:19.971978 3198 log.go:172] (0xc000b9d290) (0xc000adc500) Stream removed, broadcasting: 1\nI0521 00:32:19.971994 3198 log.go:172] (0xc000b9d290) Go away received\nI0521 00:32:19.972337 3198 log.go:172] (0xc000b9d290) (0xc000adc500) Stream removed, broadcasting: 1\nI0521 00:32:19.972355 3198 log.go:172] (0xc000b9d290) (0xc000adc000) Stream removed, broadcasting: 3\nI0521 00:32:19.972364 3198 log.go:172] (0xc000b9d290) (0xc000a84000) Stream removed, broadcasting: 5\n" May 21 00:32:19.977: INFO: stdout: 
"\naffinity-clusterip-transition-vrxrr\naffinity-clusterip-transition-vrxrr\naffinity-clusterip-transition-vrxrr\naffinity-clusterip-transition-vrxrr\naffinity-clusterip-transition-vrxrr\naffinity-clusterip-transition-vrxrr\naffinity-clusterip-transition-vrxrr\naffinity-clusterip-transition-vrxrr\naffinity-clusterip-transition-vrxrr\naffinity-clusterip-transition-vrxrr\naffinity-clusterip-transition-vrxrr\naffinity-clusterip-transition-vrxrr\naffinity-clusterip-transition-vrxrr\naffinity-clusterip-transition-vrxrr\naffinity-clusterip-transition-vrxrr\naffinity-clusterip-transition-vrxrr" May 21 00:32:19.977: INFO: Received response from host: May 21 00:32:19.977: INFO: Received response from host: affinity-clusterip-transition-vrxrr May 21 00:32:19.977: INFO: Received response from host: affinity-clusterip-transition-vrxrr May 21 00:32:19.977: INFO: Received response from host: affinity-clusterip-transition-vrxrr May 21 00:32:19.977: INFO: Received response from host: affinity-clusterip-transition-vrxrr May 21 00:32:19.977: INFO: Received response from host: affinity-clusterip-transition-vrxrr May 21 00:32:19.977: INFO: Received response from host: affinity-clusterip-transition-vrxrr May 21 00:32:19.977: INFO: Received response from host: affinity-clusterip-transition-vrxrr May 21 00:32:19.977: INFO: Received response from host: affinity-clusterip-transition-vrxrr May 21 00:32:19.977: INFO: Received response from host: affinity-clusterip-transition-vrxrr May 21 00:32:19.977: INFO: Received response from host: affinity-clusterip-transition-vrxrr May 21 00:32:19.977: INFO: Received response from host: affinity-clusterip-transition-vrxrr May 21 00:32:19.977: INFO: Received response from host: affinity-clusterip-transition-vrxrr May 21 00:32:19.978: INFO: Received response from host: affinity-clusterip-transition-vrxrr May 21 00:32:19.978: INFO: Received response from host: affinity-clusterip-transition-vrxrr May 21 00:32:19.978: INFO: Received response from host: 
affinity-clusterip-transition-vrxrr May 21 00:32:19.978: INFO: Received response from host: affinity-clusterip-transition-vrxrr May 21 00:32:19.978: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-5062, will wait for the garbage collector to delete the pods May 21 00:32:20.177: INFO: Deleting ReplicationController affinity-clusterip-transition took: 90.287395ms May 21 00:32:20.678: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 500.262027ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:32:35.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5062" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:30.642 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":159,"skipped":2734,"failed":0} [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:32:35.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service 
account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod May 21 00:32:39.426: INFO: Pod pod-hostip-27040ad1-adf0-4894-9373-0f677be11594 has hostIP: 172.17.0.12 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:32:39.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2552" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":288,"completed":160,"skipped":2734,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:32:39.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 21 00:32:39.522: INFO: Waiting up to 5m0s for pod "downward-api-380db923-a21e-4137-8476-e1ad6835ca6c" in namespace "downward-api-7905" to be "Succeeded or Failed" May 21 00:32:39.547: INFO: Pod 
"downward-api-380db923-a21e-4137-8476-e1ad6835ca6c": Phase="Pending", Reason="", readiness=false. Elapsed: 25.306987ms May 21 00:32:41.552: INFO: Pod "downward-api-380db923-a21e-4137-8476-e1ad6835ca6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029955919s May 21 00:32:43.556: INFO: Pod "downward-api-380db923-a21e-4137-8476-e1ad6835ca6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03432604s STEP: Saw pod success May 21 00:32:43.556: INFO: Pod "downward-api-380db923-a21e-4137-8476-e1ad6835ca6c" satisfied condition "Succeeded or Failed" May 21 00:32:43.560: INFO: Trying to get logs from node latest-worker pod downward-api-380db923-a21e-4137-8476-e1ad6835ca6c container dapi-container: STEP: delete the pod May 21 00:32:43.743: INFO: Waiting for pod downward-api-380db923-a21e-4137-8476-e1ad6835ca6c to disappear May 21 00:32:43.774: INFO: Pod downward-api-380db923-a21e-4137-8476-e1ad6835ca6c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:32:43.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7905" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":288,"completed":161,"skipped":2786,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:32:43.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-5750 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 21 00:32:44.003: INFO: Found 0 stateful pods, waiting for 3 May 21 00:32:54.008: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 21 00:32:54.008: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 21 00:32:54.008: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 21 00:33:04.020: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true 
May 21 00:33:04.020: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 21 00:33:04.020: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 21 00:33:04.049: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 21 00:33:14.117: INFO: Updating stateful set ss2 May 21 00:33:14.164: INFO: Waiting for Pod statefulset-5750/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 21 00:33:24.832: INFO: Found 2 stateful pods, waiting for 3 May 21 00:33:34.836: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 21 00:33:34.836: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 21 00:33:34.836: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 21 00:33:34.857: INFO: Updating stateful set ss2 May 21 00:33:34.937: INFO: Waiting for Pod statefulset-5750/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 21 00:33:44.960: INFO: Updating stateful set ss2 May 21 00:33:45.026: INFO: Waiting for StatefulSet statefulset-5750/ss2 to complete update May 21 00:33:45.026: INFO: Waiting for Pod statefulset-5750/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 21 00:33:55.033: INFO: Waiting for StatefulSet statefulset-5750/ss2 to complete update May 21 00:33:55.033: INFO: Waiting for Pod statefulset-5750/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 21 00:34:05.033: INFO: Deleting all statefulset in ns statefulset-5750 May 21 00:34:05.036: INFO: Scaling statefulset ss2 to 0 May 21 00:34:25.051: INFO: Waiting for statefulset status.replicas updated to 0 May 21 00:34:25.053: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:34:25.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5750" for this suite. • [SLOW TEST:101.296 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":288,"completed":162,"skipped":2802,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:34:25.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command May 21 00:34:25.180: INFO: Waiting up to 5m0s for pod "client-containers-26e6fb8e-c1c3-4499-96a0-3b082b20ffcd" in namespace "containers-7162" to be "Succeeded or Failed" May 21 00:34:25.210: INFO: Pod "client-containers-26e6fb8e-c1c3-4499-96a0-3b082b20ffcd": Phase="Pending", Reason="", readiness=false. Elapsed: 30.341745ms May 21 00:34:27.215: INFO: Pod "client-containers-26e6fb8e-c1c3-4499-96a0-3b082b20ffcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034578035s May 21 00:34:29.224: INFO: Pod "client-containers-26e6fb8e-c1c3-4499-96a0-3b082b20ffcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043470774s STEP: Saw pod success May 21 00:34:29.224: INFO: Pod "client-containers-26e6fb8e-c1c3-4499-96a0-3b082b20ffcd" satisfied condition "Succeeded or Failed" May 21 00:34:29.227: INFO: Trying to get logs from node latest-worker pod client-containers-26e6fb8e-c1c3-4499-96a0-3b082b20ffcd container test-container: STEP: delete the pod May 21 00:34:29.251: INFO: Waiting for pod client-containers-26e6fb8e-c1c3-4499-96a0-3b082b20ffcd to disappear May 21 00:34:29.256: INFO: Pod client-containers-26e6fb8e-c1c3-4499-96a0-3b082b20ffcd no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:34:29.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7162" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":288,"completed":163,"skipped":2841,"failed":0} S ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:34:29.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-8382/configmap-test-5bd5de78-7b9d-42da-91ef-e30cfb3d371f STEP: Creating a pod to test consume configMaps May 21 00:34:29.390: INFO: Waiting up to 5m0s for pod "pod-configmaps-22b98018-404f-421f-90ad-212307983009" in namespace "configmap-8382" to be "Succeeded or Failed" May 21 00:34:29.394: INFO: Pod "pod-configmaps-22b98018-404f-421f-90ad-212307983009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.250155ms May 21 00:34:31.410: INFO: Pod "pod-configmaps-22b98018-404f-421f-90ad-212307983009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020020992s May 21 00:34:33.413: INFO: Pod "pod-configmaps-22b98018-404f-421f-90ad-212307983009": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023714998s STEP: Saw pod success May 21 00:34:33.414: INFO: Pod "pod-configmaps-22b98018-404f-421f-90ad-212307983009" satisfied condition "Succeeded or Failed" May 21 00:34:33.416: INFO: Trying to get logs from node latest-worker pod pod-configmaps-22b98018-404f-421f-90ad-212307983009 container env-test: STEP: delete the pod May 21 00:34:33.438: INFO: Waiting for pod pod-configmaps-22b98018-404f-421f-90ad-212307983009 to disappear May 21 00:34:33.464: INFO: Pod pod-configmaps-22b98018-404f-421f-90ad-212307983009 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:34:33.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8382" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":164,"skipped":2842,"failed":0} SSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:34:33.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 21 00:34:33.645: INFO: Waiting up to 5m0s for pod "downward-api-5d3eea09-0f35-440d-b0a5-7b873efdfd21" in namespace "downward-api-7044" to be 
"Succeeded or Failed" May 21 00:34:33.670: INFO: Pod "downward-api-5d3eea09-0f35-440d-b0a5-7b873efdfd21": Phase="Pending", Reason="", readiness=false. Elapsed: 24.722885ms May 21 00:34:35.673: INFO: Pod "downward-api-5d3eea09-0f35-440d-b0a5-7b873efdfd21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027943567s May 21 00:34:37.677: INFO: Pod "downward-api-5d3eea09-0f35-440d-b0a5-7b873efdfd21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031412276s STEP: Saw pod success May 21 00:34:37.677: INFO: Pod "downward-api-5d3eea09-0f35-440d-b0a5-7b873efdfd21" satisfied condition "Succeeded or Failed" May 21 00:34:37.680: INFO: Trying to get logs from node latest-worker pod downward-api-5d3eea09-0f35-440d-b0a5-7b873efdfd21 container dapi-container: STEP: delete the pod May 21 00:34:37.744: INFO: Waiting for pod downward-api-5d3eea09-0f35-440d-b0a5-7b873efdfd21 to disappear May 21 00:34:37.754: INFO: Pod downward-api-5d3eea09-0f35-440d-b0a5-7b873efdfd21 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:34:37.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7044" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":288,"completed":165,"skipped":2847,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:34:37.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 21 00:34:37.800: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 21 00:34:49.489: INFO: >>> kubeConfig: /root/.kube/config May 21 00:34:51.444: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:35:02.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4112" for this suite. 
• [SLOW TEST:24.467 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":288,"completed":166,"skipped":2857,"failed":0} SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:35:02.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 21 00:35:02.324: INFO: PodSpec: initContainers in spec.initContainers May 21 00:35:54.943: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-62af1c6d-9743-46ae-adcb-5e34a2f4ee5a", GenerateName:"", Namespace:"init-container-2490", 
SelfLink:"/api/v1/namespaces/init-container-2490/pods/pod-init-62af1c6d-9743-46ae-adcb-5e34a2f4ee5a", UID:"782c33e6-6a59-4670-8cee-18d4bf4f471a", ResourceVersion:"6360549", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725618102, loc:(*time.Location)(0x7c342a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"324167497"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0021b6e60), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0021b6ea0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0021b6ee0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0021b6f20)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-sg84j", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00626a6c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), 
AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-sg84j", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-sg84j", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-sg84j", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0024e4c08), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002468310), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", 
Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0024e4c90)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0024e4cb0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0024e4cb8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0024e4cbc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725618102, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725618102, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725618102, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63725618102, loc:(*time.Location)(0x7c342a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.13", PodIP:"10.244.1.190", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.190"}}, StartTime:(*v1.Time)(0xc0021b6f40), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0024683f0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002468460)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://b2e9e31fa4a3bbe68217874b2661c994c58d326250de43bd98962027a8ddd7a9", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0021b6f80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0021b6f60), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc0024e4dbf)}}, 
QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:35:54.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2490" for this suite. • [SLOW TEST:52.720 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":288,"completed":167,"skipped":2860,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:35:54.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller 
my-hostname-basic-03bc2e0b-7fc2-41b0-8fc2-7485935ad60b May 21 00:35:55.042: INFO: Pod name my-hostname-basic-03bc2e0b-7fc2-41b0-8fc2-7485935ad60b: Found 0 pods out of 1 May 21 00:36:00.046: INFO: Pod name my-hostname-basic-03bc2e0b-7fc2-41b0-8fc2-7485935ad60b: Found 1 pods out of 1 May 21 00:36:00.046: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-03bc2e0b-7fc2-41b0-8fc2-7485935ad60b" are running May 21 00:36:00.050: INFO: Pod "my-hostname-basic-03bc2e0b-7fc2-41b0-8fc2-7485935ad60b-xt5bf" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-21 00:35:55 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-21 00:35:58 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-21 00:35:58 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-21 00:35:55 +0000 UTC Reason: Message:}]) May 21 00:36:00.050: INFO: Trying to dial the pod May 21 00:36:05.063: INFO: Controller my-hostname-basic-03bc2e0b-7fc2-41b0-8fc2-7485935ad60b: Got expected result from replica 1 [my-hostname-basic-03bc2e0b-7fc2-41b0-8fc2-7485935ad60b-xt5bf]: "my-hostname-basic-03bc2e0b-7fc2-41b0-8fc2-7485935ad60b-xt5bf", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:36:05.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2807" for this suite. 
• [SLOW TEST:10.120 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":168,"skipped":2881,"failed":0} SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:36:05.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 21 00:36:05.161: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:36:12.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6696" for this suite. 
• [SLOW TEST:7.888 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":288,"completed":169,"skipped":2884,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:36:12.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 21 00:36:13.039: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a5ed192-20fa-4722-8f2d-329dae0916e1" in namespace "projected-6586" to be "Succeeded or Failed" May 21 00:36:13.057: INFO: Pod "downwardapi-volume-8a5ed192-20fa-4722-8f2d-329dae0916e1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.983643ms May 21 00:36:15.061: INFO: Pod "downwardapi-volume-8a5ed192-20fa-4722-8f2d-329dae0916e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021434923s May 21 00:36:17.065: INFO: Pod "downwardapi-volume-8a5ed192-20fa-4722-8f2d-329dae0916e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025901155s STEP: Saw pod success May 21 00:36:17.065: INFO: Pod "downwardapi-volume-8a5ed192-20fa-4722-8f2d-329dae0916e1" satisfied condition "Succeeded or Failed" May 21 00:36:17.069: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-8a5ed192-20fa-4722-8f2d-329dae0916e1 container client-container: STEP: delete the pod May 21 00:36:17.139: INFO: Waiting for pod downwardapi-volume-8a5ed192-20fa-4722-8f2d-329dae0916e1 to disappear May 21 00:36:17.164: INFO: Pod downwardapi-volume-8a5ed192-20fa-4722-8f2d-329dae0916e1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:36:17.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6586" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":170,"skipped":2887,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:36:17.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-4e958b21-da98-45c4-a766-487a1efc0b07 STEP: Creating a pod to test consume configMaps May 21 00:36:17.256: INFO: Waiting up to 5m0s for pod "pod-configmaps-0a87d815-7f98-4712-8332-014b8e2193c9" in namespace "configmap-2551" to be "Succeeded or Failed" May 21 00:36:17.275: INFO: Pod "pod-configmaps-0a87d815-7f98-4712-8332-014b8e2193c9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.565457ms May 21 00:36:19.278: INFO: Pod "pod-configmaps-0a87d815-7f98-4712-8332-014b8e2193c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022345139s May 21 00:36:21.283: INFO: Pod "pod-configmaps-0a87d815-7f98-4712-8332-014b8e2193c9": Phase="Running", Reason="", readiness=true. Elapsed: 4.026660249s May 21 00:36:23.288: INFO: Pod "pod-configmaps-0a87d815-7f98-4712-8332-014b8e2193c9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.031405883s STEP: Saw pod success May 21 00:36:23.288: INFO: Pod "pod-configmaps-0a87d815-7f98-4712-8332-014b8e2193c9" satisfied condition "Succeeded or Failed" May 21 00:36:23.291: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-0a87d815-7f98-4712-8332-014b8e2193c9 container configmap-volume-test: STEP: delete the pod May 21 00:36:23.322: INFO: Waiting for pod pod-configmaps-0a87d815-7f98-4712-8332-014b8e2193c9 to disappear May 21 00:36:23.337: INFO: Pod pod-configmaps-0a87d815-7f98-4712-8332-014b8e2193c9 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:36:23.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2551" for this suite. • [SLOW TEST:6.154 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":171,"skipped":2899,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:36:23.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should 
be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-a3ad319d-6182-4e72-990e-4b187aa91ef9 STEP: Creating a pod to test consume secrets May 21 00:36:23.434: INFO: Waiting up to 5m0s for pod "pod-secrets-a48422ca-1239-455f-948e-277a822b1107" in namespace "secrets-8994" to be "Succeeded or Failed" May 21 00:36:23.438: INFO: Pod "pod-secrets-a48422ca-1239-455f-948e-277a822b1107": Phase="Pending", Reason="", readiness=false. Elapsed: 4.21284ms May 21 00:36:25.443: INFO: Pod "pod-secrets-a48422ca-1239-455f-948e-277a822b1107": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008718845s May 21 00:36:27.447: INFO: Pod "pod-secrets-a48422ca-1239-455f-948e-277a822b1107": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012685452s STEP: Saw pod success May 21 00:36:27.447: INFO: Pod "pod-secrets-a48422ca-1239-455f-948e-277a822b1107" satisfied condition "Succeeded or Failed" May 21 00:36:27.450: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-a48422ca-1239-455f-948e-277a822b1107 container secret-volume-test: STEP: delete the pod May 21 00:36:27.495: INFO: Waiting for pod pod-secrets-a48422ca-1239-455f-948e-277a822b1107 to disappear May 21 00:36:27.510: INFO: Pod pod-secrets-a48422ca-1239-455f-948e-277a822b1107 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:36:27.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8994" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":172,"skipped":2912,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:36:27.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 21 00:36:35.668: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 21 00:36:35.716: INFO: Pod pod-with-prestop-http-hook still exists May 21 00:36:37.717: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 21 00:36:37.723: INFO: Pod pod-with-prestop-http-hook still exists May 21 00:36:39.717: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 21 00:36:39.721: INFO: Pod pod-with-prestop-http-hook still exists May 21 00:36:41.717: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 21 00:36:41.720: INFO: Pod pod-with-prestop-http-hook still exists May 21 00:36:43.717: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 21 00:36:43.721: INFO: Pod pod-with-prestop-http-hook still exists May 21 00:36:45.717: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 21 00:36:45.721: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:36:45.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-186" for this suite. 
• [SLOW TEST:18.233 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":288,"completed":173,"skipped":2935,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:36:45.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 21 00:36:46.538: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 21 00:36:48.610: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725618206, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725618206, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725618206, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725618206, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 21 00:36:51.649: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:37:01.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5220" for this suite. STEP: Destroying namespace "webhook-5220-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.257 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":288,"completed":174,"skipped":2944,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:37:02.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 21 00:37:02.087: INFO: Waiting up to 5m0s for pod 
"pod-14bf7b40-d506-467c-a474-bbc47165fc7b" in namespace "emptydir-2859" to be "Succeeded or Failed" May 21 00:37:02.099: INFO: Pod "pod-14bf7b40-d506-467c-a474-bbc47165fc7b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.004805ms May 21 00:37:04.103: INFO: Pod "pod-14bf7b40-d506-467c-a474-bbc47165fc7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016275276s May 21 00:37:06.107: INFO: Pod "pod-14bf7b40-d506-467c-a474-bbc47165fc7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020647275s STEP: Saw pod success May 21 00:37:06.107: INFO: Pod "pod-14bf7b40-d506-467c-a474-bbc47165fc7b" satisfied condition "Succeeded or Failed" May 21 00:37:06.114: INFO: Trying to get logs from node latest-worker pod pod-14bf7b40-d506-467c-a474-bbc47165fc7b container test-container: STEP: delete the pod May 21 00:37:06.154: INFO: Waiting for pod pod-14bf7b40-d506-467c-a474-bbc47165fc7b to disappear May 21 00:37:06.164: INFO: Pod pod-14bf7b40-d506-467c-a474-bbc47165fc7b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:37:06.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2859" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":175,"skipped":2971,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:37:06.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-10f00aa5-39ab-4f61-9fae-4d3c5480965a STEP: Creating a pod to test consume secrets May 21 00:37:06.508: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5eff7949-55b2-4449-b1ee-89b5cb213f70" in namespace "projected-7495" to be "Succeeded or Failed" May 21 00:37:06.524: INFO: Pod "pod-projected-secrets-5eff7949-55b2-4449-b1ee-89b5cb213f70": Phase="Pending", Reason="", readiness=false. Elapsed: 15.929023ms May 21 00:37:08.528: INFO: Pod "pod-projected-secrets-5eff7949-55b2-4449-b1ee-89b5cb213f70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019989367s May 21 00:37:10.533: INFO: Pod "pod-projected-secrets-5eff7949-55b2-4449-b1ee-89b5cb213f70": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024579374s STEP: Saw pod success May 21 00:37:10.533: INFO: Pod "pod-projected-secrets-5eff7949-55b2-4449-b1ee-89b5cb213f70" satisfied condition "Succeeded or Failed" May 21 00:37:10.536: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-5eff7949-55b2-4449-b1ee-89b5cb213f70 container projected-secret-volume-test: STEP: delete the pod May 21 00:37:10.574: INFO: Waiting for pod pod-projected-secrets-5eff7949-55b2-4449-b1ee-89b5cb213f70 to disappear May 21 00:37:10.579: INFO: Pod pod-projected-secrets-5eff7949-55b2-4449-b1ee-89b5cb213f70 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:37:10.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7495" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":176,"skipped":2974,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:37:10.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 21 00:37:10.730: INFO: Waiting up to 5m0s for pod 
"pod-6f8ff11c-6263-4afc-932c-63d39a68243c" in namespace "emptydir-2449" to be "Succeeded or Failed" May 21 00:37:10.736: INFO: Pod "pod-6f8ff11c-6263-4afc-932c-63d39a68243c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.828318ms May 21 00:37:12.754: INFO: Pod "pod-6f8ff11c-6263-4afc-932c-63d39a68243c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024263522s May 21 00:37:14.764: INFO: Pod "pod-6f8ff11c-6263-4afc-932c-63d39a68243c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034568906s STEP: Saw pod success May 21 00:37:14.765: INFO: Pod "pod-6f8ff11c-6263-4afc-932c-63d39a68243c" satisfied condition "Succeeded or Failed" May 21 00:37:14.768: INFO: Trying to get logs from node latest-worker2 pod pod-6f8ff11c-6263-4afc-932c-63d39a68243c container test-container: STEP: delete the pod May 21 00:37:14.799: INFO: Waiting for pod pod-6f8ff11c-6263-4afc-932c-63d39a68243c to disappear May 21 00:37:14.826: INFO: Pod pod-6f8ff11c-6263-4afc-932c-63d39a68243c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:37:14.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2449" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":177,"skipped":2981,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 00:37:14.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 21 00:37:15.633: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 21 00:37:17.646: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725618235, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725618235, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725618235, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725618235, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 21 00:37:20.677: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 00:37:20.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7918" for this suite.
STEP: Destroying namespace "webhook-7918-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.076 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":288,"completed":178,"skipped":2993,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 00:37:20.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 00:37:25.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5875" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":288,"completed":179,"skipped":3026,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 00:37:25.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-projected-62wx
STEP: Creating a pod to test atomic-volume-subpath
May 21 00:37:25.212: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-62wx" in namespace "subpath-586" to be "Succeeded or Failed"
May 21 00:37:25.215: INFO: Pod "pod-subpath-test-projected-62wx": Phase="Pending", Reason="", readiness=false. Elapsed: 3.130367ms
May 21 00:37:27.219: INFO: Pod "pod-subpath-test-projected-62wx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006611947s
May 21 00:37:29.327: INFO: Pod "pod-subpath-test-projected-62wx": Phase="Running", Reason="", readiness=true. Elapsed: 4.115129632s
May 21 00:37:31.332: INFO: Pod "pod-subpath-test-projected-62wx": Phase="Running", Reason="", readiness=true. Elapsed: 6.119858307s
May 21 00:37:33.337: INFO: Pod "pod-subpath-test-projected-62wx": Phase="Running", Reason="", readiness=true. Elapsed: 8.124597639s
May 21 00:37:35.342: INFO: Pod "pod-subpath-test-projected-62wx": Phase="Running", Reason="", readiness=true. Elapsed: 10.129628866s
May 21 00:37:37.345: INFO: Pod "pod-subpath-test-projected-62wx": Phase="Running", Reason="", readiness=true. Elapsed: 12.133181228s
May 21 00:37:39.348: INFO: Pod "pod-subpath-test-projected-62wx": Phase="Running", Reason="", readiness=true. Elapsed: 14.136519355s
May 21 00:37:41.353: INFO: Pod "pod-subpath-test-projected-62wx": Phase="Running", Reason="", readiness=true. Elapsed: 16.140947708s
May 21 00:37:43.357: INFO: Pod "pod-subpath-test-projected-62wx": Phase="Running", Reason="", readiness=true. Elapsed: 18.14536439s
May 21 00:37:45.361: INFO: Pod "pod-subpath-test-projected-62wx": Phase="Running", Reason="", readiness=true. Elapsed: 20.148661953s
May 21 00:37:47.366: INFO: Pod "pod-subpath-test-projected-62wx": Phase="Running", Reason="", readiness=true. Elapsed: 22.153570082s
May 21 00:37:49.370: INFO: Pod "pod-subpath-test-projected-62wx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.157610502s
STEP: Saw pod success
May 21 00:37:49.370: INFO: Pod "pod-subpath-test-projected-62wx" satisfied condition "Succeeded or Failed"
May 21 00:37:49.372: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-62wx container test-container-subpath-projected-62wx:
STEP: delete the pod
May 21 00:37:49.414: INFO: Waiting for pod pod-subpath-test-projected-62wx to disappear
May 21 00:37:49.501: INFO: Pod pod-subpath-test-projected-62wx no longer exists
STEP: Deleting pod pod-subpath-test-projected-62wx
May 21 00:37:49.501: INFO: Deleting pod "pod-subpath-test-projected-62wx" in namespace "subpath-586"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 00:37:49.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-586" for this suite.
• [SLOW TEST:24.414 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":288,"completed":180,"skipped":3038,"failed":0}
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 00:37:49.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-9729
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 21 00:37:49.568: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 21 00:37:49.636: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 21 00:37:51.639: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 21 00:37:53.640: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 21 00:37:55.640: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 21 00:37:57.640: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 21 00:37:59.639: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 21 00:38:01.640: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 21 00:38:03.640: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 21 00:38:05.640: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 21 00:38:07.639: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 21 00:38:09.639: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 21 00:38:11.640: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 21 00:38:11.654: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 21 00:38:15.740: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.197 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9729 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 21 00:38:15.740: INFO: >>> kubeConfig: /root/.kube/config
I0521 00:38:15.779980 8 log.go:172] (0xc001d120b0) (0xc002a54640) Create stream
I0521 00:38:15.780005 8 log.go:172] (0xc001d120b0) (0xc002a54640) Stream added, broadcasting: 1
I0521 00:38:15.781853 8 log.go:172] (0xc001d120b0) Reply frame received for 1
I0521 00:38:15.781900 8 log.go:172] (0xc001d120b0) (0xc0025bc0a0) Create stream
I0521 00:38:15.781914 8 log.go:172] (0xc001d120b0) (0xc0025bc0a0) Stream added, broadcasting: 3
I0521 00:38:15.783183 8 log.go:172] (0xc001d120b0) Reply frame received for 3
I0521 00:38:15.783219 8 log.go:172] (0xc001d120b0) (0xc0025bc140) Create stream
I0521 00:38:15.783233 8 log.go:172] (0xc001d120b0) (0xc0025bc140) Stream added, broadcasting: 5
I0521 00:38:15.784253 8 log.go:172] (0xc001d120b0) Reply frame received for 5
I0521 00:38:16.853438 8 log.go:172] (0xc001d120b0) Data frame received for 5
I0521 00:38:16.853485 8 log.go:172] (0xc001d120b0) Data frame received for 3
I0521 00:38:16.853635 8 log.go:172] (0xc0025bc0a0) (3) Data frame handling
I0521 00:38:16.853661 8 log.go:172] (0xc0025bc0a0) (3) Data frame sent
I0521 00:38:16.853672 8 log.go:172] (0xc001d120b0) Data frame received for 3
I0521 00:38:16.853686 8 log.go:172] (0xc0025bc0a0) (3) Data frame handling
I0521 00:38:16.853741 8 log.go:172] (0xc0025bc140) (5) Data frame handling
I0521 00:38:16.855690 8 log.go:172] (0xc001d120b0) Data frame received for 1
I0521 00:38:16.855732 8 log.go:172] (0xc002a54640) (1) Data frame handling
I0521 00:38:16.855760 8 log.go:172] (0xc002a54640) (1) Data frame sent
I0521 00:38:16.855782 8 log.go:172] (0xc001d120b0) (0xc002a54640) Stream removed, broadcasting: 1
I0521 00:38:16.855801 8 log.go:172] (0xc001d120b0) Go away received
I0521 00:38:16.855949 8 log.go:172] (0xc001d120b0) (0xc002a54640) Stream removed, broadcasting: 1
I0521 00:38:16.855974 8 log.go:172] (0xc001d120b0) (0xc0025bc0a0) Stream removed, broadcasting: 3
I0521 00:38:16.856001 8 log.go:172] (0xc001d120b0) (0xc0025bc140) Stream removed, broadcasting: 5
May 21 00:38:16.856: INFO: Found all expected endpoints: [netserver-0]
May 21 00:38:16.859: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.214 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9729 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 21 00:38:16.860: INFO: >>> kubeConfig: /root/.kube/config
I0521 00:38:16.889508 8 log.go:172] (0xc002370840) (0xc00124c000) Create stream
I0521 00:38:16.889529 8 log.go:172] (0xc002370840) (0xc00124c000) Stream added, broadcasting: 1
I0521 00:38:16.891283 8 log.go:172] (0xc002370840) Reply frame received for 1
I0521 00:38:16.891337 8 log.go:172] (0xc002370840) (0xc001348000) Create stream
I0521 00:38:16.891354 8 log.go:172] (0xc002370840) (0xc001348000) Stream added, broadcasting: 3
I0521 00:38:16.892355 8 log.go:172] (0xc002370840) Reply frame received for 3
I0521 00:38:16.892393 8 log.go:172] (0xc002370840) (0xc0020bcaa0) Create stream
I0521 00:38:16.892408 8 log.go:172] (0xc002370840) (0xc0020bcaa0) Stream added, broadcasting: 5
I0521 00:38:16.893641 8 log.go:172] (0xc002370840) Reply frame received for 5
I0521 00:38:17.979968 8 log.go:172] (0xc002370840) Data frame received for 3
I0521 00:38:17.980024 8 log.go:172] (0xc001348000) (3) Data frame handling
I0521 00:38:17.980048 8 log.go:172] (0xc001348000) (3) Data frame sent
I0521 00:38:17.980258 8 log.go:172] (0xc002370840) Data frame received for 5
I0521 00:38:17.980280 8 log.go:172] (0xc0020bcaa0) (5) Data frame handling
I0521 00:38:17.980310 8 log.go:172] (0xc002370840) Data frame received for 3
I0521 00:38:17.980414 8 log.go:172] (0xc001348000) (3) Data frame handling
I0521 00:38:17.982452 8 log.go:172] (0xc002370840) Data frame received for 1
I0521 00:38:17.982516 8 log.go:172] (0xc00124c000) (1) Data frame handling
I0521 00:38:17.982546 8 log.go:172] (0xc00124c000) (1) Data frame sent
I0521 00:38:17.982569 8 log.go:172] (0xc002370840) (0xc00124c000) Stream removed, broadcasting: 1
I0521 00:38:17.982667 8 log.go:172] (0xc002370840) Go away received
I0521 00:38:17.982737 8 log.go:172] (0xc002370840) (0xc00124c000) Stream removed, broadcasting: 1
I0521 00:38:17.982782 8 log.go:172] (0xc002370840) (0xc001348000) Stream removed, broadcasting: 3
I0521 00:38:17.982799 8 log.go:172] (0xc002370840) (0xc0020bcaa0) Stream removed, broadcasting: 5
May 21 00:38:17.982: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 00:38:17.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9729" for this suite.
• [SLOW TEST:28.482 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":181,"skipped":3038,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 00:38:17.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-nwqq
STEP: Creating a pod to test atomic-volume-subpath
May 21 00:38:18.134: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-nwqq" in namespace "subpath-8911" to be "Succeeded or Failed"
May 21 00:38:18.146: INFO: Pod "pod-subpath-test-configmap-nwqq": Phase="Pending", Reason="", readiness=false. Elapsed: 11.479513ms
May 21 00:38:20.150: INFO: Pod "pod-subpath-test-configmap-nwqq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015631453s
May 21 00:38:22.154: INFO: Pod "pod-subpath-test-configmap-nwqq": Phase="Running", Reason="", readiness=true. Elapsed: 4.019970996s
May 21 00:38:24.244: INFO: Pod "pod-subpath-test-configmap-nwqq": Phase="Running", Reason="", readiness=true. Elapsed: 6.110013338s
May 21 00:38:26.248: INFO: Pod "pod-subpath-test-configmap-nwqq": Phase="Running", Reason="", readiness=true. Elapsed: 8.114177334s
May 21 00:38:28.253: INFO: Pod "pod-subpath-test-configmap-nwqq": Phase="Running", Reason="", readiness=true. Elapsed: 10.118794567s
May 21 00:38:30.257: INFO: Pod "pod-subpath-test-configmap-nwqq": Phase="Running", Reason="", readiness=true. Elapsed: 12.123437767s
May 21 00:38:32.262: INFO: Pod "pod-subpath-test-configmap-nwqq": Phase="Running", Reason="", readiness=true. Elapsed: 14.127914798s
May 21 00:38:34.267: INFO: Pod "pod-subpath-test-configmap-nwqq": Phase="Running", Reason="", readiness=true. Elapsed: 16.132728108s
May 21 00:38:36.271: INFO: Pod "pod-subpath-test-configmap-nwqq": Phase="Running", Reason="", readiness=true. Elapsed: 18.137421574s
May 21 00:38:38.275: INFO: Pod "pod-subpath-test-configmap-nwqq": Phase="Running", Reason="", readiness=true. Elapsed: 20.140865794s
May 21 00:38:40.278: INFO: Pod "pod-subpath-test-configmap-nwqq": Phase="Running", Reason="", readiness=true. Elapsed: 22.144452112s
May 21 00:38:42.283: INFO: Pod "pod-subpath-test-configmap-nwqq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.148656501s
STEP: Saw pod success
May 21 00:38:42.283: INFO: Pod "pod-subpath-test-configmap-nwqq" satisfied condition "Succeeded or Failed"
May 21 00:38:42.285: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-nwqq container test-container-subpath-configmap-nwqq:
STEP: delete the pod
May 21 00:38:42.318: INFO: Waiting for pod pod-subpath-test-configmap-nwqq to disappear
May 21 00:38:42.329: INFO: Pod pod-subpath-test-configmap-nwqq no longer exists
STEP: Deleting pod pod-subpath-test-configmap-nwqq
May 21 00:38:42.329: INFO: Deleting pod "pod-subpath-test-configmap-nwqq" in namespace "subpath-8911"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 00:38:42.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8911" for this suite.
• [SLOW TEST:24.372 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":288,"completed":182,"skipped":3082,"failed":0}
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 00:38:42.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 00:38:42.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May 21 00:38:45.420: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1575 create -f -'
May 21 00:38:49.320: INFO: stderr: ""
May 21 00:38:49.320: INFO: stdout: "e2e-test-crd-publish-openapi-6831-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
May 21 00:38:49.320: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1575 delete e2e-test-crd-publish-openapi-6831-crds test-cr'
May 21 00:38:49.425: INFO: stderr: ""
May 21 00:38:49.425: INFO: stdout: "e2e-test-crd-publish-openapi-6831-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
May 21 00:38:49.425: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1575 apply -f -'
May 21 00:38:49.727: INFO: stderr: ""
May 21 00:38:49.728: INFO: stdout: "e2e-test-crd-publish-openapi-6831-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
May 21 00:38:49.728: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1575 delete e2e-test-crd-publish-openapi-6831-crds test-cr'
May 21 00:38:49.832: INFO: stderr: ""
May 21 00:38:49.832: INFO: stdout: "e2e-test-crd-publish-openapi-6831-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
May 21 00:38:49.832: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6831-crds'
May 21 00:38:50.103: INFO: stderr: ""
May 21 00:38:50.103: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6831-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 00:38:53.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1575" for this suite.
• [SLOW TEST:10.682 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":288,"completed":183,"skipped":3082,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 00:38:53.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
May 21 00:38:53.602: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
May 21 00:38:56.024: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725618333, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725618333, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725618333, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725618333, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 21 00:38:58.035: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725618333, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725618333, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725618333, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725618333, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 21 00:39:01.107: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 00:39:01.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 00:39:02.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-8469" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
• [SLOW TEST:9.302 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":288,"completed":184,"skipped":3091,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 00:39:02.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 00:39:13.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4798" for this suite.
• [SLOW TEST:11.100 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":288,"completed":185,"skipped":3106,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 00:39:13.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 00:39:24.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3061" for this suite.
• [SLOW TEST:11.184 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":288,"completed":186,"skipped":3123,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 00:39:24.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9122.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9122.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9122.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9122.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9122.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F.
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9122.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 21 00:39:30.876: INFO: DNS probes using dns-9122/dns-test-901664e8-dfde-4f7d-b11e-f8b13405579e succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:39:30.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9122" for this suite. • [SLOW TEST:6.336 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":288,"completed":187,"skipped":3140,"failed":0} [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:39:30.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be 
provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-f29dce13-235f-4e1c-8489-b726c54d3151 STEP: Creating a pod to test consume secrets May 21 00:39:31.310: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6ed47a08-5de9-4eb2-929d-a01b16d7c717" in namespace "projected-6197" to be "Succeeded or Failed" May 21 00:39:31.314: INFO: Pod "pod-projected-secrets-6ed47a08-5de9-4eb2-929d-a01b16d7c717": Phase="Pending", Reason="", readiness=false. Elapsed: 3.09359ms May 21 00:39:33.318: INFO: Pod "pod-projected-secrets-6ed47a08-5de9-4eb2-929d-a01b16d7c717": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007378703s May 21 00:39:35.322: INFO: Pod "pod-projected-secrets-6ed47a08-5de9-4eb2-929d-a01b16d7c717": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011593116s STEP: Saw pod success May 21 00:39:35.322: INFO: Pod "pod-projected-secrets-6ed47a08-5de9-4eb2-929d-a01b16d7c717" satisfied condition "Succeeded or Failed" May 21 00:39:35.325: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-6ed47a08-5de9-4eb2-929d-a01b16d7c717 container projected-secret-volume-test: STEP: delete the pod May 21 00:39:35.394: INFO: Waiting for pod pod-projected-secrets-6ed47a08-5de9-4eb2-929d-a01b16d7c717 to disappear May 21 00:39:35.409: INFO: Pod pod-projected-secrets-6ed47a08-5de9-4eb2-929d-a01b16d7c717 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:39:35.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6197" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":188,"skipped":3140,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:39:35.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:39:51.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4301" for this suite. • [SLOW TEST:16.230 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":288,"completed":189,"skipped":3141,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:39:51.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 21 00:39:51.767: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2410 /api/v1/namespaces/watch-2410/configmaps/e2e-watch-test-resource-version c8b5ea3e-deaf-4931-8c07-c9fed50a0b1e 6362016 0 2020-05-21 00:39:51 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-21 00:39:51 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 21 00:39:51.767: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2410 /api/v1/namespaces/watch-2410/configmaps/e2e-watch-test-resource-version c8b5ea3e-deaf-4931-8c07-c9fed50a0b1e 6362017 0 2020-05-21 00:39:51 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-21 00:39:51 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:39:51.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2410" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":288,"completed":190,"skipped":3202,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:39:51.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 00:39:51.857: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:39:55.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6252" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":288,"completed":191,"skipped":3207,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:39:55.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0521 00:40:06.040617 8 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 21 00:40:06.040: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:40:06.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3779" for this suite. • [SLOW TEST:10.146 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":288,"completed":192,"skipped":3209,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:40:06.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:40:22.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6937" for this suite. • [SLOW TEST:16.305 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":288,"completed":193,"skipped":3212,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:40:22.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-4682fe8b-733b-4695-8e93-248e1b43fdfc STEP: Creating a pod to test consume configMaps May 21 00:40:22.421: INFO: Waiting up to 5m0s for pod "pod-configmaps-944f52df-6e1b-4b86-a8ad-40ca1a8aa9ea" in namespace "configmap-8084" to be "Succeeded or Failed" May 21 00:40:22.508: INFO: Pod "pod-configmaps-944f52df-6e1b-4b86-a8ad-40ca1a8aa9ea": Phase="Pending", Reason="", readiness=false. Elapsed: 87.17642ms May 21 00:40:24.512: INFO: Pod "pod-configmaps-944f52df-6e1b-4b86-a8ad-40ca1a8aa9ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090976213s May 21 00:40:26.516: INFO: Pod "pod-configmaps-944f52df-6e1b-4b86-a8ad-40ca1a8aa9ea": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.095472978s STEP: Saw pod success May 21 00:40:26.517: INFO: Pod "pod-configmaps-944f52df-6e1b-4b86-a8ad-40ca1a8aa9ea" satisfied condition "Succeeded or Failed" May 21 00:40:26.521: INFO: Trying to get logs from node latest-worker pod pod-configmaps-944f52df-6e1b-4b86-a8ad-40ca1a8aa9ea container configmap-volume-test: STEP: delete the pod May 21 00:40:26.613: INFO: Waiting for pod pod-configmaps-944f52df-6e1b-4b86-a8ad-40ca1a8aa9ea to disappear May 21 00:40:26.617: INFO: Pod pod-configmaps-944f52df-6e1b-4b86-a8ad-40ca1a8aa9ea no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:40:26.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8084" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":194,"skipped":3219,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:40:26.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts 
dns-querier-2.dns-test-service-2.dns-5851.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5851.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5851.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5851.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-5851.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5851.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 21 00:40:34.838: INFO: DNS probes using dns-5851/dns-test-232b95ba-ba8b-4f12-b091-fed00a13bee1 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:40:35.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5851" for this suite. 
• [SLOW TEST:8.482 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":288,"completed":195,"skipped":3282,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:40:35.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4645 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4645;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4645 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4645;check="$$(dig +notcp +noall +answer 
+search dns-test-service.dns-4645.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4645.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4645.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4645.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4645.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4645.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4645.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4645.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4645.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4645.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4645.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4645.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4645.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 251.44.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.44.251_udp@PTR;check="$$(dig +tcp +noall +answer +search 251.44.111.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.111.44.251_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4645 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4645;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4645 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4645;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4645.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4645.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4645.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4645.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4645.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4645.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4645.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4645.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4645.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4645.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4645.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4645.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4645.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 251.44.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.44.251_udp@PTR;check="$$(dig +tcp +noall +answer +search 251.44.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.44.251_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 21 00:40:41.656: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:41.659: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:41.661: INFO: Unable to read wheezy_udp@dns-test-service.dns-4645 from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:41.663: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4645 from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:41.666: INFO: Unable to read wheezy_udp@dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods 
dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:41.668: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:41.671: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:41.674: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:41.691: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:41.694: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:41.697: INFO: Unable to read jessie_udp@dns-test-service.dns-4645 from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:41.699: INFO: Unable to read jessie_tcp@dns-test-service.dns-4645 from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:41.701: INFO: Unable to read jessie_udp@dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the 
requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:41.703: INFO: Unable to read jessie_tcp@dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:41.706: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:41.708: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:41.721: INFO: Lookups using dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4645 wheezy_tcp@dns-test-service.dns-4645 wheezy_udp@dns-test-service.dns-4645.svc wheezy_tcp@dns-test-service.dns-4645.svc wheezy_udp@_http._tcp.dns-test-service.dns-4645.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4645.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4645 jessie_tcp@dns-test-service.dns-4645 jessie_udp@dns-test-service.dns-4645.svc jessie_tcp@dns-test-service.dns-4645.svc jessie_udp@_http._tcp.dns-test-service.dns-4645.svc jessie_tcp@_http._tcp.dns-test-service.dns-4645.svc] May 21 00:40:46.758: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:46.761: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not 
find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:46.764: INFO: Unable to read wheezy_udp@dns-test-service.dns-4645 from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:46.777: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4645 from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:46.780: INFO: Unable to read wheezy_udp@dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:46.783: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:46.787: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:46.790: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:46.811: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:46.813: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: 
the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:46.816: INFO: Unable to read jessie_udp@dns-test-service.dns-4645 from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:46.831: INFO: Unable to read jessie_tcp@dns-test-service.dns-4645 from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:46.834: INFO: Unable to read jessie_udp@dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:46.836: INFO: Unable to read jessie_tcp@dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:46.838: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:46.841: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:46.887: INFO: Lookups using dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4645 wheezy_tcp@dns-test-service.dns-4645 wheezy_udp@dns-test-service.dns-4645.svc wheezy_tcp@dns-test-service.dns-4645.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-4645.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4645.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4645 jessie_tcp@dns-test-service.dns-4645 jessie_udp@dns-test-service.dns-4645.svc jessie_tcp@dns-test-service.dns-4645.svc jessie_udp@_http._tcp.dns-test-service.dns-4645.svc jessie_tcp@_http._tcp.dns-test-service.dns-4645.svc] May 21 00:40:51.726: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:51.730: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:51.733: INFO: Unable to read wheezy_udp@dns-test-service.dns-4645 from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:51.736: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4645 from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:51.739: INFO: Unable to read wheezy_udp@dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:51.741: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:51.744: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:51.747: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:51.766: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:51.768: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:51.771: INFO: Unable to read jessie_udp@dns-test-service.dns-4645 from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:51.774: INFO: Unable to read jessie_tcp@dns-test-service.dns-4645 from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:51.776: INFO: Unable to read jessie_udp@dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:51.778: INFO: Unable to read jessie_tcp@dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:51.781: 
INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:51.783: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:51.800: INFO: Lookups using dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4645 wheezy_tcp@dns-test-service.dns-4645 wheezy_udp@dns-test-service.dns-4645.svc wheezy_tcp@dns-test-service.dns-4645.svc wheezy_udp@_http._tcp.dns-test-service.dns-4645.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4645.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4645 jessie_tcp@dns-test-service.dns-4645 jessie_udp@dns-test-service.dns-4645.svc jessie_tcp@dns-test-service.dns-4645.svc jessie_udp@_http._tcp.dns-test-service.dns-4645.svc jessie_tcp@_http._tcp.dns-test-service.dns-4645.svc] May 21 00:40:56.726: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:56.729: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:56.732: INFO: Unable to read wheezy_udp@dns-test-service.dns-4645 from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 
00:40:56.735: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4645 from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:56.738: INFO: Unable to read wheezy_udp@dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:56.741: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:56.744: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:56.747: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:56.768: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:56.771: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:56.775: INFO: Unable to read jessie_udp@dns-test-service.dns-4645 from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods 
dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:56.778: INFO: Unable to read jessie_tcp@dns-test-service.dns-4645 from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:56.780: INFO: Unable to read jessie_udp@dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:56.784: INFO: Unable to read jessie_tcp@dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:56.787: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:56.790: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:40:56.809: INFO: Lookups using dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4645 wheezy_tcp@dns-test-service.dns-4645 wheezy_udp@dns-test-service.dns-4645.svc wheezy_tcp@dns-test-service.dns-4645.svc wheezy_udp@_http._tcp.dns-test-service.dns-4645.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4645.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4645 jessie_tcp@dns-test-service.dns-4645 jessie_udp@dns-test-service.dns-4645.svc jessie_tcp@dns-test-service.dns-4645.svc 
jessie_udp@_http._tcp.dns-test-service.dns-4645.svc jessie_tcp@_http._tcp.dns-test-service.dns-4645.svc] May 21 00:41:01.726: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:41:01.730: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:41:01.734: INFO: Unable to read wheezy_udp@dns-test-service.dns-4645 from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:41:01.738: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4645 from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:41:01.741: INFO: Unable to read wheezy_udp@dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:41:01.745: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:41:01.748: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:41:01.751: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4645.svc from pod 
dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:41:01.778: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:41:01.781: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:41:01.784: INFO: Unable to read jessie_udp@dns-test-service.dns-4645 from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:41:01.787: INFO: Unable to read jessie_tcp@dns-test-service.dns-4645 from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:41:01.790: INFO: Unable to read jessie_udp@dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:41:01.794: INFO: Unable to read jessie_tcp@dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:41:01.797: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:41:01.799: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:41:01.818: INFO: Lookups using dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4645 wheezy_tcp@dns-test-service.dns-4645 wheezy_udp@dns-test-service.dns-4645.svc wheezy_tcp@dns-test-service.dns-4645.svc wheezy_udp@_http._tcp.dns-test-service.dns-4645.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4645.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4645 jessie_tcp@dns-test-service.dns-4645 jessie_udp@dns-test-service.dns-4645.svc jessie_tcp@dns-test-service.dns-4645.svc jessie_udp@_http._tcp.dns-test-service.dns-4645.svc jessie_tcp@_http._tcp.dns-test-service.dns-4645.svc] May 21 00:41:06.725: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:41:06.730: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:41:06.733: INFO: Unable to read wheezy_udp@dns-test-service.dns-4645 from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:41:06.736: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4645 from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:41:06.738: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:41:06.741: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:41:06.744: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:41:06.747: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:41:06.766: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:41:06.768: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:41:06.771: INFO: Unable to read jessie_udp@dns-test-service.dns-4645 from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:41:06.774: INFO: Unable to read jessie_tcp@dns-test-service.dns-4645 from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:41:06.776: 
INFO: Unable to read jessie_udp@dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:41:06.779: INFO: Unable to read jessie_tcp@dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:41:06.781: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:41:06.784: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4645.svc from pod dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244: the server could not find the requested resource (get pods dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244) May 21 00:41:06.810: INFO: Lookups using dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4645 wheezy_tcp@dns-test-service.dns-4645 wheezy_udp@dns-test-service.dns-4645.svc wheezy_tcp@dns-test-service.dns-4645.svc wheezy_udp@_http._tcp.dns-test-service.dns-4645.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4645.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4645 jessie_tcp@dns-test-service.dns-4645 jessie_udp@dns-test-service.dns-4645.svc jessie_tcp@dns-test-service.dns-4645.svc jessie_udp@_http._tcp.dns-test-service.dns-4645.svc jessie_tcp@_http._tcp.dns-test-service.dns-4645.svc] May 21 00:41:11.805: INFO: DNS probes using dns-4645/dns-test-10a5b897-85cf-4025-a48c-b28f4dbdf244 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:41:12.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4645" for this suite. • [SLOW TEST:37.463 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":288,"completed":196,"skipped":3295,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:41:12.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-564013e3-a496-48b7-9293-9350f2a2edfa in namespace container-probe-4595 May 21 00:41:16.846: INFO: Started pod test-webserver-564013e3-a496-48b7-9293-9350f2a2edfa in namespace 
container-probe-4595 STEP: checking the pod's current state and verifying that restartCount is present May 21 00:41:16.849: INFO: Initial restart count of pod test-webserver-564013e3-a496-48b7-9293-9350f2a2edfa is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:45:17.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4595" for this suite. • [SLOW TEST:244.956 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":197,"skipped":3348,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:45:17.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 00:45:17.807: INFO: Waiting up to 5m0s for pod "busybox-user-65534-4b8f98fb-eee2-4cfa-ab71-fab1c604db9d" in namespace "security-context-test-8458" to be "Succeeded or Failed" May 21 00:45:17.951: INFO: Pod "busybox-user-65534-4b8f98fb-eee2-4cfa-ab71-fab1c604db9d": Phase="Pending", Reason="", readiness=false. Elapsed: 143.413251ms May 21 00:45:19.972: INFO: Pod "busybox-user-65534-4b8f98fb-eee2-4cfa-ab71-fab1c604db9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16522411s May 21 00:45:21.976: INFO: Pod "busybox-user-65534-4b8f98fb-eee2-4cfa-ab71-fab1c604db9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.169256559s May 21 00:45:21.976: INFO: Pod "busybox-user-65534-4b8f98fb-eee2-4cfa-ab71-fab1c604db9d" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:45:21.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8458" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":198,"skipped":3365,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:45:21.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-3694/secret-test-d634ef0a-e58f-49d1-9281-083c0744aa55 STEP: Creating a pod to test consume secrets May 21 00:45:22.070: INFO: Waiting up to 5m0s for pod "pod-configmaps-0ca0d65d-054c-498c-90aa-9ccdf265da5f" in namespace "secrets-3694" to be "Succeeded or Failed" May 21 00:45:22.089: INFO: Pod "pod-configmaps-0ca0d65d-054c-498c-90aa-9ccdf265da5f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.191167ms May 21 00:45:24.094: INFO: Pod "pod-configmaps-0ca0d65d-054c-498c-90aa-9ccdf265da5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024216503s May 21 00:45:26.099: INFO: Pod "pod-configmaps-0ca0d65d-054c-498c-90aa-9ccdf265da5f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.028840238s STEP: Saw pod success May 21 00:45:26.099: INFO: Pod "pod-configmaps-0ca0d65d-054c-498c-90aa-9ccdf265da5f" satisfied condition "Succeeded or Failed" May 21 00:45:26.102: INFO: Trying to get logs from node latest-worker pod pod-configmaps-0ca0d65d-054c-498c-90aa-9ccdf265da5f container env-test: STEP: delete the pod May 21 00:45:26.159: INFO: Waiting for pod pod-configmaps-0ca0d65d-054c-498c-90aa-9ccdf265da5f to disappear May 21 00:45:26.200: INFO: Pod pod-configmaps-0ca0d65d-054c-498c-90aa-9ccdf265da5f no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:45:26.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3694" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":199,"skipped":3371,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:45:26.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the 
deployment to be ready May 21 00:45:26.878: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 21 00:45:28.888: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725618726, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725618726, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725618727, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725618726, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 21 00:45:31.926: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding 
mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:45:31.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3491" for this suite. STEP: Destroying namespace "webhook-3491-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.846 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":288,"completed":200,"skipped":3375,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:45:32.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:45:36.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7040" for this suite. • [SLOW TEST:5.007 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":288,"completed":201,"skipped":3380,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:45:37.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 21 00:45:41.784: INFO: Successfully updated pod "labelsupdated7f7e8ba-e9ca-4ee8-b8cb-e7c9ee2a6e6b" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:45:45.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9556" for this suite. • [SLOW TEST:8.753 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":202,"skipped":3397,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:45:45.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all May 21 00:45:45.900: INFO: Waiting up to 5m0s for pod 
"client-containers-dbe11f81-9cc7-48a8-96e0-9295aa16e6df" in namespace "containers-3251" to be "Succeeded or Failed" May 21 00:45:45.903: INFO: Pod "client-containers-dbe11f81-9cc7-48a8-96e0-9295aa16e6df": Phase="Pending", Reason="", readiness=false. Elapsed: 3.040583ms May 21 00:45:48.003: INFO: Pod "client-containers-dbe11f81-9cc7-48a8-96e0-9295aa16e6df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102363614s May 21 00:45:50.009: INFO: Pod "client-containers-dbe11f81-9cc7-48a8-96e0-9295aa16e6df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.109045186s STEP: Saw pod success May 21 00:45:50.009: INFO: Pod "client-containers-dbe11f81-9cc7-48a8-96e0-9295aa16e6df" satisfied condition "Succeeded or Failed" May 21 00:45:50.020: INFO: Trying to get logs from node latest-worker pod client-containers-dbe11f81-9cc7-48a8-96e0-9295aa16e6df container test-container: STEP: delete the pod May 21 00:45:50.061: INFO: Waiting for pod client-containers-dbe11f81-9cc7-48a8-96e0-9295aa16e6df to disappear May 21 00:45:50.104: INFO: Pod client-containers-dbe11f81-9cc7-48a8-96e0-9295aa16e6df no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:45:50.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3251" for this suite. 
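The Docker Containers test above ("test override all") sets both `command` and `args` on the container, which replace the image's ENTRYPOINT and CMD respectively. A small sketch of the resolution rule Kubernetes applies — if `command` is set, the image CMD is ignored unless `args` is also set; if only `args` is set, it replaces CMD while the image ENTRYPOINT is kept:

```python
def effective_argv(image_entrypoint, image_cmd, command=None, args=None):
    """Resolve the argv the runtime executes from image ENTRYPOINT/CMD
    and the pod spec's container command/args fields."""
    if command is not None:
        # Pod `command` replaces ENTRYPOINT; image CMD is dropped,
        # and pod `args` (if any) are appended.
        return list(command) + list(args or [])
    # No `command`: image ENTRYPOINT is kept; pod `args` replace image CMD.
    return list(image_entrypoint) + list(args if args is not None else image_cmd)

# "override all": both fields set, image defaults fully replaced.
argv = effective_argv(["/ep"], ["default-arg"],
                      command=["/bin/sh", "-c"], args=["echo hi"])
```

The four combinations (neither set, `command` only, `args` only, both) each give a different argv, which is what this conformance test verifies against the container's actual output.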
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":288,"completed":203,"skipped":3410,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:45:50.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 00:45:50.168: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config version' May 21 00:45:50.326: INFO: stderr: "" May 21 00:45:50.326: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.3.35+3416442e4b7eeb\", GitCommit:\"3416442e4b7eebfce360f5b7468c6818d3e882f8\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:24:24Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:45:50.326: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9258" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":288,"completed":204,"skipped":3415,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:45:50.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 00:45:50.405: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:45:50.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5978" for this suite. 
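The CRD test above exercises the `/status` sub-resource: when the status subresource is enabled for a CustomResourceDefinition, writes to `/status` change only the `.status` stanza, and any `.spec` content in the request body is ignored by the API server. A toy model of that isolation property (the function and field names are illustrative, not the apiextensions implementation):

```python
def apply_status_patch(obj: dict, patch: dict) -> dict:
    """Sketch: model how a merge patch against a /status subresource
    only takes effect on .status — spec edits in the body are ignored."""
    out = dict(obj)  # shallow copy; .spec is left untouched
    out["status"] = {**obj.get("status", {}), **patch.get("status", {})}
    return out

crd = {
    "spec": {"group": "mygroup.example.com"},
    "status": {"conditions": []},
}
# The patch tries to touch both spec and status; only status lands.
patched = apply_status_patch(
    crd, {"status": {"phase": "Active"}, "spec": {"group": "other"}}
)
```

This is the behavior the "getting/updating/patching custom resource definition status sub-resource works" spec asserts against the real API server.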
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":288,"completed":205,"skipped":3421,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:45:51.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-88ef96f0-aab7-4319-a16f-049a736dbfda STEP: Creating a pod to test consume configMaps May 21 00:45:51.130: INFO: Waiting up to 5m0s for pod "pod-configmaps-f2c96677-2753-4682-8c7c-7408a31fbb94" in namespace "configmap-4975" to be "Succeeded or Failed" May 21 00:45:51.213: INFO: Pod "pod-configmaps-f2c96677-2753-4682-8c7c-7408a31fbb94": Phase="Pending", Reason="", readiness=false. Elapsed: 83.57617ms May 21 00:45:53.218: INFO: Pod "pod-configmaps-f2c96677-2753-4682-8c7c-7408a31fbb94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08852358s May 21 00:45:55.260: INFO: Pod "pod-configmaps-f2c96677-2753-4682-8c7c-7408a31fbb94": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130393656s May 21 00:45:57.264: INFO: Pod "pod-configmaps-f2c96677-2753-4682-8c7c-7408a31fbb94": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.134768659s STEP: Saw pod success May 21 00:45:57.264: INFO: Pod "pod-configmaps-f2c96677-2753-4682-8c7c-7408a31fbb94" satisfied condition "Succeeded or Failed" May 21 00:45:57.268: INFO: Trying to get logs from node latest-worker pod pod-configmaps-f2c96677-2753-4682-8c7c-7408a31fbb94 container configmap-volume-test: STEP: delete the pod May 21 00:45:57.287: INFO: Waiting for pod pod-configmaps-f2c96677-2753-4682-8c7c-7408a31fbb94 to disappear May 21 00:45:57.304: INFO: Pod pod-configmaps-f2c96677-2753-4682-8c7c-7408a31fbb94 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:45:57.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4975" for this suite. • [SLOW TEST:6.311 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":206,"skipped":3450,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:45:57.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in 
namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-e3894c11-3085-46f5-88f5-40b1c9a52c7a STEP: Creating a pod to test consume configMaps May 21 00:45:57.401: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b2d3f291-5a2f-4e11-8dfe-34056a24e558" in namespace "projected-2113" to be "Succeeded or Failed" May 21 00:45:57.405: INFO: Pod "pod-projected-configmaps-b2d3f291-5a2f-4e11-8dfe-34056a24e558": Phase="Pending", Reason="", readiness=false. Elapsed: 4.720061ms May 21 00:45:59.410: INFO: Pod "pod-projected-configmaps-b2d3f291-5a2f-4e11-8dfe-34056a24e558": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008952777s May 21 00:46:01.414: INFO: Pod "pod-projected-configmaps-b2d3f291-5a2f-4e11-8dfe-34056a24e558": Phase="Running", Reason="", readiness=true. Elapsed: 4.013784436s May 21 00:46:03.419: INFO: Pod "pod-projected-configmaps-b2d3f291-5a2f-4e11-8dfe-34056a24e558": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.0185856s STEP: Saw pod success May 21 00:46:03.419: INFO: Pod "pod-projected-configmaps-b2d3f291-5a2f-4e11-8dfe-34056a24e558" satisfied condition "Succeeded or Failed" May 21 00:46:03.422: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-b2d3f291-5a2f-4e11-8dfe-34056a24e558 container projected-configmap-volume-test: STEP: delete the pod May 21 00:46:03.460: INFO: Waiting for pod pod-projected-configmaps-b2d3f291-5a2f-4e11-8dfe-34056a24e558 to disappear May 21 00:46:03.476: INFO: Pod pod-projected-configmaps-b2d3f291-5a2f-4e11-8dfe-34056a24e558 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:46:03.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2113" for this suite. • [SLOW TEST:6.173 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":207,"skipped":3461,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:46:03.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a 
default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3328.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-3328.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3328.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-3328.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3328.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3328.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-3328.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3328.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-3328.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3328.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 21 00:46:09.708: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:09.710: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:09.730: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:09.733: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:09.743: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:09.771: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local from 
pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:09.775: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:09.779: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:09.785: INFO: Lookups using dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3328.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3328.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local jessie_udp@dns-test-service-2.dns-3328.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3328.svc.cluster.local] May 21 00:46:14.790: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:14.795: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:14.799: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3328.svc.cluster.local from 
pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:14.802: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:14.811: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:14.813: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:14.816: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:14.820: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:14.826: INFO: Lookups using dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3328.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3328.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local jessie_udp@dns-test-service-2.dns-3328.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3328.svc.cluster.local] May 21 00:46:19.824: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:19.828: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:19.832: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:19.835: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:19.844: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:19.847: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:19.850: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3328.svc.cluster.local from pod 
dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:19.853: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:19.858: INFO: Lookups using dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3328.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3328.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local jessie_udp@dns-test-service-2.dns-3328.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3328.svc.cluster.local] May 21 00:46:24.791: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:24.795: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:24.798: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:24.800: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3328.svc.cluster.local from pod 
dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:24.808: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:24.810: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:24.813: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:24.815: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:24.820: INFO: Lookups using dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3328.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3328.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local jessie_udp@dns-test-service-2.dns-3328.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3328.svc.cluster.local] May 21 00:46:29.806: INFO: Unable to read 
wheezy_udp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:29.810: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:29.814: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:29.817: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:29.825: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:29.828: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:29.831: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:29.838: INFO: Unable to read 
jessie_tcp@dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:29.866: INFO: Lookups using dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3328.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3328.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local jessie_udp@dns-test-service-2.dns-3328.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3328.svc.cluster.local] May 21 00:46:34.791: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:34.795: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:34.800: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:34.803: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:34.815: INFO: Unable to read 
jessie_udp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:34.818: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:34.821: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:34.824: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3328.svc.cluster.local from pod dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840: the server could not find the requested resource (get pods dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840) May 21 00:46:34.829: INFO: Lookups using dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3328.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3328.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3328.svc.cluster.local jessie_udp@dns-test-service-2.dns-3328.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3328.svc.cluster.local] May 21 00:46:39.826: INFO: DNS probes using dns-3328/dns-test-2c21e747-d4ea-4d9c-9aeb-137e79256840 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 
00:46:40.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3328" for this suite. • [SLOW TEST:37.054 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":288,"completed":208,"skipped":3468,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:46:40.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info May 21 00:46:40.626: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config cluster-info' May 21 00:46:40.729: INFO: stderr: "" May 21 00:46:40.729: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at 
\x1b[0;33mhttps://172.30.12.66:32773/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:46:40.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-37" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":288,"completed":209,"skipped":3471,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:46:40.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-a607b426-c152-46c8-a34b-4cd538329fa0 STEP: Creating a pod to test consume secrets May 21 00:46:40.852: INFO: Waiting up to 5m0s for pod "pod-secrets-d33c3d36-2031-4c2a-a33b-a77bab3cc3d5" in namespace "secrets-49" to be "Succeeded or Failed" May 21 00:46:40.886: INFO: Pod "pod-secrets-d33c3d36-2031-4c2a-a33b-a77bab3cc3d5": Phase="Pending", Reason="", 
readiness=false. Elapsed: 33.720074ms May 21 00:46:43.039: INFO: Pod "pod-secrets-d33c3d36-2031-4c2a-a33b-a77bab3cc3d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.186933945s May 21 00:46:45.043: INFO: Pod "pod-secrets-d33c3d36-2031-4c2a-a33b-a77bab3cc3d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.190887879s STEP: Saw pod success May 21 00:46:45.043: INFO: Pod "pod-secrets-d33c3d36-2031-4c2a-a33b-a77bab3cc3d5" satisfied condition "Succeeded or Failed" May 21 00:46:45.046: INFO: Trying to get logs from node latest-worker pod pod-secrets-d33c3d36-2031-4c2a-a33b-a77bab3cc3d5 container secret-volume-test: STEP: delete the pod May 21 00:46:45.155: INFO: Waiting for pod pod-secrets-d33c3d36-2031-4c2a-a33b-a77bab3cc3d5 to disappear May 21 00:46:45.161: INFO: Pod pod-secrets-d33c3d36-2031-4c2a-a33b-a77bab3cc3d5 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:46:45.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-49" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":210,"skipped":3545,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:46:45.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:46:56.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1277" for this suite. • [SLOW TEST:11.207 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":288,"completed":211,"skipped":3551,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:46:56.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 21 00:46:56.445: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:47:10.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4514" for this suite.
• [SLOW TEST:14.521 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":288,"completed":212,"skipped":3561,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:47:10.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 21 00:47:11.039: INFO: Waiting up to 5m0s for pod "downward-api-94a929ea-137a-4544-bd2d-8e120b7f2dff" in namespace "downward-api-7138" to be "Succeeded or Failed" May 21 00:47:11.135: INFO: Pod "downward-api-94a929ea-137a-4544-bd2d-8e120b7f2dff": Phase="Pending", Reason="", readiness=false. Elapsed: 96.145085ms May 21 00:47:13.139: INFO: Pod "downward-api-94a929ea-137a-4544-bd2d-8e120b7f2dff": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.100099017s May 21 00:47:15.168: INFO: Pod "downward-api-94a929ea-137a-4544-bd2d-8e120b7f2dff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.128750844s STEP: Saw pod success May 21 00:47:15.168: INFO: Pod "downward-api-94a929ea-137a-4544-bd2d-8e120b7f2dff" satisfied condition "Succeeded or Failed" May 21 00:47:15.171: INFO: Trying to get logs from node latest-worker2 pod downward-api-94a929ea-137a-4544-bd2d-8e120b7f2dff container dapi-container: STEP: delete the pod May 21 00:47:15.192: INFO: Waiting for pod downward-api-94a929ea-137a-4544-bd2d-8e120b7f2dff to disappear May 21 00:47:15.263: INFO: Pod downward-api-94a929ea-137a-4544-bd2d-8e120b7f2dff no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:47:15.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7138" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":288,"completed":213,"skipped":3592,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:47:15.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource 
definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:47:15.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-429" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":288,"completed":214,"skipped":3649,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:47:15.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-2722aef8-45d5-4f53-8144-dc4a8fc862b6 STEP: Creating a pod to test consume configMaps May 21 00:47:15.614: INFO: Waiting up to 5m0s for pod "pod-configmaps-7a08539d-8102-4b32-bf7f-d8e0bcf7a411" in namespace "configmap-6485" to be "Succeeded or Failed" May 21 00:47:15.628: INFO: Pod "pod-configmaps-7a08539d-8102-4b32-bf7f-d8e0bcf7a411": Phase="Pending", Reason="", readiness=false. Elapsed: 14.380678ms May 21 00:47:17.638: INFO: Pod "pod-configmaps-7a08539d-8102-4b32-bf7f-d8e0bcf7a411": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023848116s May 21 00:47:19.641: INFO: Pod "pod-configmaps-7a08539d-8102-4b32-bf7f-d8e0bcf7a411": Phase="Running", Reason="", readiness=true. Elapsed: 4.026938998s May 21 00:47:21.646: INFO: Pod "pod-configmaps-7a08539d-8102-4b32-bf7f-d8e0bcf7a411": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.031786164s STEP: Saw pod success May 21 00:47:21.646: INFO: Pod "pod-configmaps-7a08539d-8102-4b32-bf7f-d8e0bcf7a411" satisfied condition "Succeeded or Failed" May 21 00:47:21.649: INFO: Trying to get logs from node latest-worker pod pod-configmaps-7a08539d-8102-4b32-bf7f-d8e0bcf7a411 container configmap-volume-test: STEP: delete the pod May 21 00:47:21.687: INFO: Waiting for pod pod-configmaps-7a08539d-8102-4b32-bf7f-d8e0bcf7a411 to disappear May 21 00:47:21.695: INFO: Pod pod-configmaps-7a08539d-8102-4b32-bf7f-d8e0bcf7a411 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:47:21.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6485" for this suite. • [SLOW TEST:6.225 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":215,"skipped":3661,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:47:21.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should 
support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... May 21 00:47:21.805: INFO: Created pod &Pod{ObjectMeta:{dns-6685 dns-6685 /api/v1/namespaces/dns-6685/pods/dns-6685 4a60a629-42ec-44f2-ad43-ea7700cb58ff 6364163 0 2020-05-21 00:47:21 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-05-21 00:47:21 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bpwpq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bpwpq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bpwpq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,Terminati
onMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 21 00:47:21.815: INFO: The status of Pod dns-6685 is Pending, waiting for it to be Running (with Ready = true) May 21 00:47:23.819: INFO: The status of Pod dns-6685 is Pending, waiting for it to be 
Running (with Ready = true) May 21 00:47:25.819: INFO: The status of Pod dns-6685 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... May 21 00:47:25.820: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-6685 PodName:dns-6685 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 21 00:47:25.820: INFO: >>> kubeConfig: /root/.kube/config I0521 00:47:25.854046 8 log.go:172] (0xc0051484d0) (0xc001906aa0) Create stream I0521 00:47:25.854099 8 log.go:172] (0xc0051484d0) (0xc001906aa0) Stream added, broadcasting: 1 I0521 00:47:25.856025 8 log.go:172] (0xc0051484d0) Reply frame received for 1 I0521 00:47:25.856068 8 log.go:172] (0xc0051484d0) (0xc001aa5680) Create stream I0521 00:47:25.856083 8 log.go:172] (0xc0051484d0) (0xc001aa5680) Stream added, broadcasting: 3 I0521 00:47:25.857337 8 log.go:172] (0xc0051484d0) Reply frame received for 3 I0521 00:47:25.857382 8 log.go:172] (0xc0051484d0) (0xc0012783c0) Create stream I0521 00:47:25.857397 8 log.go:172] (0xc0051484d0) (0xc0012783c0) Stream added, broadcasting: 5 I0521 00:47:25.858396 8 log.go:172] (0xc0051484d0) Reply frame received for 5 I0521 00:47:25.943677 8 log.go:172] (0xc0051484d0) Data frame received for 3 I0521 00:47:25.943704 8 log.go:172] (0xc001aa5680) (3) Data frame handling I0521 00:47:25.943719 8 log.go:172] (0xc001aa5680) (3) Data frame sent I0521 00:47:25.945982 8 log.go:172] (0xc0051484d0) Data frame received for 5 I0521 00:47:25.946018 8 log.go:172] (0xc0012783c0) (5) Data frame handling I0521 00:47:25.946056 8 log.go:172] (0xc0051484d0) Data frame received for 3 I0521 00:47:25.946098 8 log.go:172] (0xc001aa5680) (3) Data frame handling I0521 00:47:25.947805 8 log.go:172] (0xc0051484d0) Data frame received for 1 I0521 00:47:25.947864 8 log.go:172] (0xc001906aa0) (1) Data frame handling I0521 00:47:25.947920 8 log.go:172] (0xc001906aa0) (1) Data frame sent I0521 00:47:25.947951 8 log.go:172] 
(0xc0051484d0) (0xc001906aa0) Stream removed, broadcasting: 1 I0521 00:47:25.947988 8 log.go:172] (0xc0051484d0) Go away received I0521 00:47:25.948051 8 log.go:172] (0xc0051484d0) (0xc001906aa0) Stream removed, broadcasting: 1 I0521 00:47:25.948076 8 log.go:172] (0xc0051484d0) (0xc001aa5680) Stream removed, broadcasting: 3 I0521 00:47:25.948095 8 log.go:172] (0xc0051484d0) (0xc0012783c0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... May 21 00:47:25.948: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-6685 PodName:dns-6685 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 21 00:47:25.948: INFO: >>> kubeConfig: /root/.kube/config I0521 00:47:25.979648 8 log.go:172] (0xc00510cb00) (0xc0010c2d20) Create stream I0521 00:47:25.979673 8 log.go:172] (0xc00510cb00) (0xc0010c2d20) Stream added, broadcasting: 1 I0521 00:47:25.981579 8 log.go:172] (0xc00510cb00) Reply frame received for 1 I0521 00:47:25.981637 8 log.go:172] (0xc00510cb00) (0xc0012785a0) Create stream I0521 00:47:25.981652 8 log.go:172] (0xc00510cb00) (0xc0012785a0) Stream added, broadcasting: 3 I0521 00:47:25.982471 8 log.go:172] (0xc00510cb00) Reply frame received for 3 I0521 00:47:25.982506 8 log.go:172] (0xc00510cb00) (0xc0012786e0) Create stream I0521 00:47:25.982519 8 log.go:172] (0xc00510cb00) (0xc0012786e0) Stream added, broadcasting: 5 I0521 00:47:25.983220 8 log.go:172] (0xc00510cb00) Reply frame received for 5 I0521 00:47:26.077654 8 log.go:172] (0xc00510cb00) Data frame received for 3 I0521 00:47:26.077691 8 log.go:172] (0xc0012785a0) (3) Data frame handling I0521 00:47:26.077716 8 log.go:172] (0xc0012785a0) (3) Data frame sent I0521 00:47:26.079873 8 log.go:172] (0xc00510cb00) Data frame received for 5 I0521 00:47:26.079916 8 log.go:172] (0xc00510cb00) Data frame received for 3 I0521 00:47:26.079968 8 log.go:172] (0xc0012785a0) (3) Data frame handling I0521 00:47:26.080003 8 
log.go:172] (0xc0012786e0) (5) Data frame handling I0521 00:47:26.081767 8 log.go:172] (0xc00510cb00) Data frame received for 1 I0521 00:47:26.081784 8 log.go:172] (0xc0010c2d20) (1) Data frame handling I0521 00:47:26.081803 8 log.go:172] (0xc0010c2d20) (1) Data frame sent I0521 00:47:26.081977 8 log.go:172] (0xc00510cb00) (0xc0010c2d20) Stream removed, broadcasting: 1 I0521 00:47:26.082039 8 log.go:172] (0xc00510cb00) Go away received I0521 00:47:26.082120 8 log.go:172] (0xc00510cb00) (0xc0010c2d20) Stream removed, broadcasting: 1 I0521 00:47:26.082139 8 log.go:172] (0xc00510cb00) (0xc0012785a0) Stream removed, broadcasting: 3 I0521 00:47:26.082147 8 log.go:172] (0xc00510cb00) (0xc0012786e0) Stream removed, broadcasting: 5 May 21 00:47:26.082: INFO: Deleting pod dns-6685... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:47:26.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6685" for this suite. 
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":288,"completed":216,"skipped":3695,"failed":0} SSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:47:26.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:48:26.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5051" for this suite. 
• [SLOW TEST:60.290 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":288,"completed":217,"skipped":3698,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:48:26.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 21 00:48:26.517: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 21 00:48:26.530: INFO: Waiting for terminating namespaces to be deleted... 
May 21 00:48:26.533: INFO: Logging pods the apiserver thinks are on node latest-worker before test May 21 00:48:26.537: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 21 00:48:26.537: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 21 00:48:26.537: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 21 00:48:26.537: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 21 00:48:26.537: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 21 00:48:26.537: INFO: Container kindnet-cni ready: true, restart count 0 May 21 00:48:26.537: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 21 00:48:26.537: INFO: Container kube-proxy ready: true, restart count 0 May 21 00:48:26.537: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test May 21 00:48:26.540: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 21 00:48:26.540: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 21 00:48:26.540: INFO: test-webserver-e5f008a6-f2de-4ec6-b791-8ff91fdcda71 from container-probe-5051 started at 2020-05-21 00:47:26 +0000 UTC (1 container statuses recorded) May 21 00:48:26.540: INFO: Container test-webserver ready: false, restart count 0 May 21 00:48:26.540: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 21 00:48:26.540: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 21 00:48:26.540: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 
container statuses recorded) May 21 00:48:26.540: INFO: Container kindnet-cni ready: true, restart count 0 May 21 00:48:26.540: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 21 00:48:26.540: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-03fcb9e2-1ee2-44d9-b5bd-228171408ff5 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-03fcb9e2-1ee2-44d9-b5bd-228171408ff5 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-03fcb9e2-1ee2-44d9-b5bd-228171408ff5 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:48:34.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2008" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.336 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":288,"completed":218,"skipped":3722,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:48:34.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container May 21 00:48:40.850: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-8674 PodName:pod-sharedvolume-33881487-cab2-4c5a-a4d9-f5e994aa88ec ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 21 00:48:40.850: INFO: >>> kubeConfig: /root/.kube/config I0521 00:48:40.881347 8 
log.go:172] (0xc004914fd0) (0xc0014e2280) Create stream I0521 00:48:40.881377 8 log.go:172] (0xc004914fd0) (0xc0014e2280) Stream added, broadcasting: 1 I0521 00:48:40.883462 8 log.go:172] (0xc004914fd0) Reply frame received for 1 I0521 00:48:40.883507 8 log.go:172] (0xc004914fd0) (0xc000da4000) Create stream I0521 00:48:40.883523 8 log.go:172] (0xc004914fd0) (0xc000da4000) Stream added, broadcasting: 3 I0521 00:48:40.884602 8 log.go:172] (0xc004914fd0) Reply frame received for 3 I0521 00:48:40.884652 8 log.go:172] (0xc004914fd0) (0xc0014e23c0) Create stream I0521 00:48:40.884665 8 log.go:172] (0xc004914fd0) (0xc0014e23c0) Stream added, broadcasting: 5 I0521 00:48:40.885767 8 log.go:172] (0xc004914fd0) Reply frame received for 5 I0521 00:48:40.964531 8 log.go:172] (0xc004914fd0) Data frame received for 5 I0521 00:48:40.964590 8 log.go:172] (0xc0014e23c0) (5) Data frame handling I0521 00:48:40.964732 8 log.go:172] (0xc004914fd0) Data frame received for 3 I0521 00:48:40.964758 8 log.go:172] (0xc000da4000) (3) Data frame handling I0521 00:48:40.964783 8 log.go:172] (0xc000da4000) (3) Data frame sent I0521 00:48:40.964803 8 log.go:172] (0xc004914fd0) Data frame received for 3 I0521 00:48:40.964811 8 log.go:172] (0xc000da4000) (3) Data frame handling I0521 00:48:40.966330 8 log.go:172] (0xc004914fd0) Data frame received for 1 I0521 00:48:40.966375 8 log.go:172] (0xc0014e2280) (1) Data frame handling I0521 00:48:40.966388 8 log.go:172] (0xc0014e2280) (1) Data frame sent I0521 00:48:40.966402 8 log.go:172] (0xc004914fd0) (0xc0014e2280) Stream removed, broadcasting: 1 I0521 00:48:40.966416 8 log.go:172] (0xc004914fd0) Go away received I0521 00:48:40.966592 8 log.go:172] (0xc004914fd0) (0xc0014e2280) Stream removed, broadcasting: 1 I0521 00:48:40.966630 8 log.go:172] (0xc004914fd0) (0xc000da4000) Stream removed, broadcasting: 3 I0521 00:48:40.966647 8 log.go:172] (0xc004914fd0) (0xc0014e23c0) Stream removed, broadcasting: 5 May 21 00:48:40.966: INFO: Exec stderr: "" 
[AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:48:40.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8674" for this suite. • [SLOW TEST:6.249 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":288,"completed":219,"skipped":3726,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:48:40.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 21 00:48:53.186: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4854 PodName:test-pod 
ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 21 00:48:53.186: INFO: >>> kubeConfig: /root/.kube/config I0521 00:48:53.219200 8 log.go:172] (0xc005182790) (0xc0012cdcc0) Create stream I0521 00:48:53.219229 8 log.go:172] (0xc005182790) (0xc0012cdcc0) Stream added, broadcasting: 1 I0521 00:48:53.221656 8 log.go:172] (0xc005182790) Reply frame received for 1 I0521 00:48:53.221697 8 log.go:172] (0xc005182790) (0xc002b29f40) Create stream I0521 00:48:53.221711 8 log.go:172] (0xc005182790) (0xc002b29f40) Stream added, broadcasting: 3 I0521 00:48:53.222546 8 log.go:172] (0xc005182790) Reply frame received for 3 I0521 00:48:53.222571 8 log.go:172] (0xc005182790) (0xc0012cde00) Create stream I0521 00:48:53.222583 8 log.go:172] (0xc005182790) (0xc0012cde00) Stream added, broadcasting: 5 I0521 00:48:53.223386 8 log.go:172] (0xc005182790) Reply frame received for 5 I0521 00:48:53.305847 8 log.go:172] (0xc005182790) Data frame received for 3 I0521 00:48:53.305885 8 log.go:172] (0xc002b29f40) (3) Data frame handling I0521 00:48:53.305922 8 log.go:172] (0xc002b29f40) (3) Data frame sent I0521 00:48:53.305942 8 log.go:172] (0xc005182790) Data frame received for 3 I0521 00:48:53.305957 8 log.go:172] (0xc002b29f40) (3) Data frame handling I0521 00:48:53.305980 8 log.go:172] (0xc005182790) Data frame received for 5 I0521 00:48:53.305996 8 log.go:172] (0xc0012cde00) (5) Data frame handling I0521 00:48:53.307836 8 log.go:172] (0xc005182790) Data frame received for 1 I0521 00:48:53.307861 8 log.go:172] (0xc0012cdcc0) (1) Data frame handling I0521 00:48:53.307881 8 log.go:172] (0xc0012cdcc0) (1) Data frame sent I0521 00:48:53.307903 8 log.go:172] (0xc005182790) (0xc0012cdcc0) Stream removed, broadcasting: 1 I0521 00:48:53.308010 8 log.go:172] (0xc005182790) (0xc0012cdcc0) Stream removed, broadcasting: 1 I0521 00:48:53.308033 8 log.go:172] (0xc005182790) (0xc002b29f40) Stream removed, broadcasting: 3 I0521 00:48:53.308050 8 log.go:172] 
(0xc005182790) (0xc0012cde00) Stream removed, broadcasting: 5 May 21 00:48:53.308: INFO: Exec stderr: "" May 21 00:48:53.308: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4854 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 21 00:48:53.308: INFO: >>> kubeConfig: /root/.kube/config I0521 00:48:53.309366 8 log.go:172] (0xc005182790) Go away received I0521 00:48:53.339552 8 log.go:172] (0xc005148f20) (0xc001452f00) Create stream I0521 00:48:53.339573 8 log.go:172] (0xc005148f20) (0xc001452f00) Stream added, broadcasting: 1 I0521 00:48:53.347612 8 log.go:172] (0xc005148f20) Reply frame received for 1 I0521 00:48:53.347670 8 log.go:172] (0xc005148f20) (0xc000e5e140) Create stream I0521 00:48:53.347705 8 log.go:172] (0xc005148f20) (0xc000e5e140) Stream added, broadcasting: 3 I0521 00:48:53.348859 8 log.go:172] (0xc005148f20) Reply frame received for 3 I0521 00:48:53.348917 8 log.go:172] (0xc005148f20) (0xc0006783c0) Create stream I0521 00:48:53.348969 8 log.go:172] (0xc005148f20) (0xc0006783c0) Stream added, broadcasting: 5 I0521 00:48:53.350359 8 log.go:172] (0xc005148f20) Reply frame received for 5 I0521 00:48:53.418283 8 log.go:172] (0xc005148f20) Data frame received for 3 I0521 00:48:53.418325 8 log.go:172] (0xc000e5e140) (3) Data frame handling I0521 00:48:53.418341 8 log.go:172] (0xc000e5e140) (3) Data frame sent I0521 00:48:53.418746 8 log.go:172] (0xc005148f20) Data frame received for 5 I0521 00:48:53.418779 8 log.go:172] (0xc0006783c0) (5) Data frame handling I0521 00:48:53.418839 8 log.go:172] (0xc005148f20) Data frame received for 3 I0521 00:48:53.418865 8 log.go:172] (0xc000e5e140) (3) Data frame handling I0521 00:48:53.420528 8 log.go:172] (0xc005148f20) Data frame received for 1 I0521 00:48:53.420564 8 log.go:172] (0xc001452f00) (1) Data frame handling I0521 00:48:53.420584 8 log.go:172] (0xc001452f00) (1) Data frame sent I0521 00:48:53.420627 8 
log.go:172] (0xc005148f20) (0xc001452f00) Stream removed, broadcasting: 1 I0521 00:48:53.420649 8 log.go:172] (0xc005148f20) Go away received I0521 00:48:53.420796 8 log.go:172] (0xc005148f20) (0xc001452f00) Stream removed, broadcasting: 1 I0521 00:48:53.420870 8 log.go:172] (0xc005148f20) (0xc000e5e140) Stream removed, broadcasting: 3 I0521 00:48:53.420902 8 log.go:172] (0xc005148f20) (0xc0006783c0) Stream removed, broadcasting: 5 May 21 00:48:53.420: INFO: Exec stderr: "" May 21 00:48:53.420: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4854 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 21 00:48:53.421: INFO: >>> kubeConfig: /root/.kube/config I0521 00:48:53.450271 8 log.go:172] (0xc0023704d0) (0xc0011b28c0) Create stream I0521 00:48:53.450302 8 log.go:172] (0xc0023704d0) (0xc0011b28c0) Stream added, broadcasting: 1 I0521 00:48:53.452299 8 log.go:172] (0xc0023704d0) Reply frame received for 1 I0521 00:48:53.452351 8 log.go:172] (0xc0023704d0) (0xc0013a0000) Create stream I0521 00:48:53.452370 8 log.go:172] (0xc0023704d0) (0xc0013a0000) Stream added, broadcasting: 3 I0521 00:48:53.453669 8 log.go:172] (0xc0023704d0) Reply frame received for 3 I0521 00:48:53.453720 8 log.go:172] (0xc0023704d0) (0xc00124c000) Create stream I0521 00:48:53.453736 8 log.go:172] (0xc0023704d0) (0xc00124c000) Stream added, broadcasting: 5 I0521 00:48:53.454925 8 log.go:172] (0xc0023704d0) Reply frame received for 5 I0521 00:48:53.512415 8 log.go:172] (0xc0023704d0) Data frame received for 5 I0521 00:48:53.512444 8 log.go:172] (0xc00124c000) (5) Data frame handling I0521 00:48:53.512493 8 log.go:172] (0xc0023704d0) Data frame received for 3 I0521 00:48:53.512527 8 log.go:172] (0xc0013a0000) (3) Data frame handling I0521 00:48:53.512544 8 log.go:172] (0xc0013a0000) (3) Data frame sent I0521 00:48:53.512556 8 log.go:172] (0xc0023704d0) Data frame received for 3 I0521 00:48:53.512563 8 
log.go:172] (0xc0013a0000) (3) Data frame handling I0521 00:48:53.513921 8 log.go:172] (0xc0023704d0) Data frame received for 1 I0521 00:48:53.513945 8 log.go:172] (0xc0011b28c0) (1) Data frame handling I0521 00:48:53.513957 8 log.go:172] (0xc0011b28c0) (1) Data frame sent I0521 00:48:53.514004 8 log.go:172] (0xc0023704d0) (0xc0011b28c0) Stream removed, broadcasting: 1 I0521 00:48:53.514078 8 log.go:172] (0xc0023704d0) Go away received I0521 00:48:53.514108 8 log.go:172] (0xc0023704d0) (0xc0011b28c0) Stream removed, broadcasting: 1 I0521 00:48:53.514121 8 log.go:172] (0xc0023704d0) (0xc0013a0000) Stream removed, broadcasting: 3 I0521 00:48:53.514130 8 log.go:172] (0xc0023704d0) (0xc00124c000) Stream removed, broadcasting: 5 May 21 00:48:53.514: INFO: Exec stderr: "" May 21 00:48:53.514: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4854 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 21 00:48:53.514: INFO: >>> kubeConfig: /root/.kube/config I0521 00:48:53.543755 8 log.go:172] (0xc0021cca50) (0xc00124c3c0) Create stream I0521 00:48:53.543772 8 log.go:172] (0xc0021cca50) (0xc00124c3c0) Stream added, broadcasting: 1 I0521 00:48:53.545800 8 log.go:172] (0xc0021cca50) Reply frame received for 1 I0521 00:48:53.545855 8 log.go:172] (0xc0021cca50) (0xc000bbe0a0) Create stream I0521 00:48:53.545928 8 log.go:172] (0xc0021cca50) (0xc000bbe0a0) Stream added, broadcasting: 3 I0521 00:48:53.546943 8 log.go:172] (0xc0021cca50) Reply frame received for 3 I0521 00:48:53.546990 8 log.go:172] (0xc0021cca50) (0xc0011b3220) Create stream I0521 00:48:53.547002 8 log.go:172] (0xc0021cca50) (0xc0011b3220) Stream added, broadcasting: 5 I0521 00:48:53.547867 8 log.go:172] (0xc0021cca50) Reply frame received for 5 I0521 00:48:53.605049 8 log.go:172] (0xc0021cca50) Data frame received for 5 I0521 00:48:53.605102 8 log.go:172] (0xc0011b3220) (5) Data frame handling I0521 
00:48:53.605310 8 log.go:172] (0xc0021cca50) Data frame received for 3 I0521 00:48:53.605330 8 log.go:172] (0xc000bbe0a0) (3) Data frame handling I0521 00:48:53.605350 8 log.go:172] (0xc000bbe0a0) (3) Data frame sent I0521 00:48:53.605366 8 log.go:172] (0xc0021cca50) Data frame received for 3 I0521 00:48:53.605374 8 log.go:172] (0xc000bbe0a0) (3) Data frame handling I0521 00:48:53.606773 8 log.go:172] (0xc0021cca50) Data frame received for 1 I0521 00:48:53.606800 8 log.go:172] (0xc00124c3c0) (1) Data frame handling I0521 00:48:53.606822 8 log.go:172] (0xc00124c3c0) (1) Data frame sent I0521 00:48:53.606850 8 log.go:172] (0xc0021cca50) (0xc00124c3c0) Stream removed, broadcasting: 1 I0521 00:48:53.606929 8 log.go:172] (0xc0021cca50) (0xc00124c3c0) Stream removed, broadcasting: 1 I0521 00:48:53.606945 8 log.go:172] (0xc0021cca50) (0xc000bbe0a0) Stream removed, broadcasting: 3 I0521 00:48:53.606953 8 log.go:172] (0xc0021cca50) (0xc0011b3220) Stream removed, broadcasting: 5 May 21 00:48:53.606: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 21 00:48:53.606: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4854 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 21 00:48:53.607: INFO: >>> kubeConfig: /root/.kube/config I0521 00:48:53.607043 8 log.go:172] (0xc0021cca50) Go away received I0521 00:48:53.634093 8 log.go:172] (0xc002c8cdc0) (0xc0013a0960) Create stream I0521 00:48:53.634119 8 log.go:172] (0xc002c8cdc0) (0xc0013a0960) Stream added, broadcasting: 1 I0521 00:48:53.636021 8 log.go:172] (0xc002c8cdc0) Reply frame received for 1 I0521 00:48:53.636066 8 log.go:172] (0xc002c8cdc0) (0xc000bbe5a0) Create stream I0521 00:48:53.636085 8 log.go:172] (0xc002c8cdc0) (0xc000bbe5a0) Stream added, broadcasting: 3 I0521 00:48:53.637003 8 log.go:172] (0xc002c8cdc0) Reply frame received for 3 I0521 
00:48:53.637037 8 log.go:172] (0xc002c8cdc0) (0xc0011b34a0) Create stream I0521 00:48:53.637050 8 log.go:172] (0xc002c8cdc0) (0xc0011b34a0) Stream added, broadcasting: 5 I0521 00:48:53.638054 8 log.go:172] (0xc002c8cdc0) Reply frame received for 5 I0521 00:48:53.703642 8 log.go:172] (0xc002c8cdc0) Data frame received for 5 I0521 00:48:53.703718 8 log.go:172] (0xc0011b34a0) (5) Data frame handling I0521 00:48:53.703755 8 log.go:172] (0xc002c8cdc0) Data frame received for 3 I0521 00:48:53.703773 8 log.go:172] (0xc000bbe5a0) (3) Data frame handling I0521 00:48:53.703790 8 log.go:172] (0xc000bbe5a0) (3) Data frame sent I0521 00:48:53.703803 8 log.go:172] (0xc002c8cdc0) Data frame received for 3 I0521 00:48:53.703816 8 log.go:172] (0xc000bbe5a0) (3) Data frame handling I0521 00:48:53.704893 8 log.go:172] (0xc002c8cdc0) Data frame received for 1 I0521 00:48:53.704923 8 log.go:172] (0xc0013a0960) (1) Data frame handling I0521 00:48:53.704944 8 log.go:172] (0xc0013a0960) (1) Data frame sent I0521 00:48:53.704959 8 log.go:172] (0xc002c8cdc0) (0xc0013a0960) Stream removed, broadcasting: 1 I0521 00:48:53.704984 8 log.go:172] (0xc002c8cdc0) Go away received I0521 00:48:53.705285 8 log.go:172] (0xc002c8cdc0) (0xc0013a0960) Stream removed, broadcasting: 1 I0521 00:48:53.705309 8 log.go:172] (0xc002c8cdc0) (0xc000bbe5a0) Stream removed, broadcasting: 3 I0521 00:48:53.705318 8 log.go:172] (0xc002c8cdc0) (0xc0011b34a0) Stream removed, broadcasting: 5 May 21 00:48:53.705: INFO: Exec stderr: "" May 21 00:48:53.705: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4854 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 21 00:48:53.705: INFO: >>> kubeConfig: /root/.kube/config I0521 00:48:53.731435 8 log.go:172] (0xc002370bb0) (0xc0003ee640) Create stream I0521 00:48:53.731473 8 log.go:172] (0xc002370bb0) (0xc0003ee640) Stream added, broadcasting: 1 I0521 00:48:53.733790 8 
log.go:172] (0xc002370bb0) Reply frame received for 1 I0521 00:48:53.733836 8 log.go:172] (0xc002370bb0) (0xc001348000) Create stream I0521 00:48:53.733847 8 log.go:172] (0xc002370bb0) (0xc001348000) Stream added, broadcasting: 3 I0521 00:48:53.734816 8 log.go:172] (0xc002370bb0) Reply frame received for 3 I0521 00:48:53.734843 8 log.go:172] (0xc002370bb0) (0xc001348460) Create stream I0521 00:48:53.734862 8 log.go:172] (0xc002370bb0) (0xc001348460) Stream added, broadcasting: 5 I0521 00:48:53.735811 8 log.go:172] (0xc002370bb0) Reply frame received for 5 I0521 00:48:53.807998 8 log.go:172] (0xc002370bb0) Data frame received for 5 I0521 00:48:53.808032 8 log.go:172] (0xc001348460) (5) Data frame handling I0521 00:48:53.808053 8 log.go:172] (0xc002370bb0) Data frame received for 3 I0521 00:48:53.808063 8 log.go:172] (0xc001348000) (3) Data frame handling I0521 00:48:53.808073 8 log.go:172] (0xc001348000) (3) Data frame sent I0521 00:48:53.808082 8 log.go:172] (0xc002370bb0) Data frame received for 3 I0521 00:48:53.808089 8 log.go:172] (0xc001348000) (3) Data frame handling I0521 00:48:53.809560 8 log.go:172] (0xc002370bb0) Data frame received for 1 I0521 00:48:53.809592 8 log.go:172] (0xc0003ee640) (1) Data frame handling I0521 00:48:53.809611 8 log.go:172] (0xc0003ee640) (1) Data frame sent I0521 00:48:53.809627 8 log.go:172] (0xc002370bb0) (0xc0003ee640) Stream removed, broadcasting: 1 I0521 00:48:53.809645 8 log.go:172] (0xc002370bb0) Go away received I0521 00:48:53.809769 8 log.go:172] (0xc002370bb0) (0xc0003ee640) Stream removed, broadcasting: 1 I0521 00:48:53.809792 8 log.go:172] (0xc002370bb0) (0xc001348000) Stream removed, broadcasting: 3 I0521 00:48:53.809801 8 log.go:172] (0xc002370bb0) (0xc001348460) Stream removed, broadcasting: 5 May 21 00:48:53.809: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 21 00:48:53.809: INFO: ExecWithOptions {Command:[cat /etc/hosts] 
Namespace:e2e-kubelet-etc-hosts-4854 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 21 00:48:53.809: INFO: >>> kubeConfig: /root/.kube/config
I0521 00:48:53.839228 8 log.go:172] (0xc0025b0370) (0xc001349a40) Create stream
I0521 00:48:53.839266 8 log.go:172] (0xc0025b0370) (0xc001349a40) Stream added, broadcasting: 1
I0521 00:48:53.841603 8 log.go:172] (0xc0025b0370) Reply frame received for 1
I0521 00:48:53.841636 8 log.go:172] (0xc0025b0370) (0xc001349e00) Create stream
I0521 00:48:53.841651 8 log.go:172] (0xc0025b0370) (0xc001349e00) Stream added, broadcasting: 3
I0521 00:48:53.842639 8 log.go:172] (0xc0025b0370) Reply frame received for 3
I0521 00:48:53.842672 8 log.go:172] (0xc0025b0370) (0xc000bbe960) Create stream
I0521 00:48:53.842685 8 log.go:172] (0xc0025b0370) (0xc000bbe960) Stream added, broadcasting: 5
I0521 00:48:53.843664 8 log.go:172] (0xc0025b0370) Reply frame received for 5
I0521 00:48:53.922163 8 log.go:172] (0xc0025b0370) Data frame received for 5
I0521 00:48:53.922265 8 log.go:172] (0xc000bbe960) (5) Data frame handling
I0521 00:48:53.922311 8 log.go:172] (0xc0025b0370) Data frame received for 3
I0521 00:48:53.922336 8 log.go:172] (0xc001349e00) (3) Data frame handling
I0521 00:48:53.922371 8 log.go:172] (0xc001349e00) (3) Data frame sent
I0521 00:48:53.922392 8 log.go:172] (0xc0025b0370) Data frame received for 3
I0521 00:48:53.922409 8 log.go:172] (0xc001349e00) (3) Data frame handling
I0521 00:48:53.923764 8 log.go:172] (0xc0025b0370) Data frame received for 1
I0521 00:48:53.923811 8 log.go:172] (0xc001349a40) (1) Data frame handling
I0521 00:48:53.923845 8 log.go:172] (0xc001349a40) (1) Data frame sent
I0521 00:48:53.923889 8 log.go:172] (0xc0025b0370) (0xc001349a40) Stream removed, broadcasting: 1
I0521 00:48:53.924013 8 log.go:172] (0xc0025b0370) Go away received
I0521 00:48:53.924068 8 log.go:172] (0xc0025b0370) (0xc001349a40) Stream removed, broadcasting: 1
I0521 00:48:53.924100 8 log.go:172] (0xc0025b0370) (0xc001349e00) Stream removed, broadcasting: 3
I0521 00:48:53.924113 8 log.go:172] (0xc0025b0370) (0xc000bbe960) Stream removed, broadcasting: 5
May 21 00:48:53.924: INFO: Exec stderr: ""
May 21 00:48:53.924: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4854 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 21 00:48:53.924: INFO: >>> kubeConfig: /root/.kube/config
I0521 00:48:53.954715 8 log.go:172] (0xc002c8da20) (0xc0013a0dc0) Create stream
I0521 00:48:53.954739 8 log.go:172] (0xc002c8da20) (0xc0013a0dc0) Stream added, broadcasting: 1
I0521 00:48:53.956632 8 log.go:172] (0xc002c8da20) Reply frame received for 1
I0521 00:48:53.956669 8 log.go:172] (0xc002c8da20) (0xc0003eeaa0) Create stream
I0521 00:48:53.956681 8 log.go:172] (0xc002c8da20) (0xc0003eeaa0) Stream added, broadcasting: 3
I0521 00:48:53.957738 8 log.go:172] (0xc002c8da20) Reply frame received for 3
I0521 00:48:53.957808 8 log.go:172] (0xc002c8da20) (0xc000bbeaa0) Create stream
I0521 00:48:53.957822 8 log.go:172] (0xc002c8da20) (0xc000bbeaa0) Stream added, broadcasting: 5
I0521 00:48:53.958799 8 log.go:172] (0xc002c8da20) Reply frame received for 5
I0521 00:48:54.022768 8 log.go:172] (0xc002c8da20) Data frame received for 3
I0521 00:48:54.022798 8 log.go:172] (0xc0003eeaa0) (3) Data frame handling
I0521 00:48:54.022816 8 log.go:172] (0xc0003eeaa0) (3) Data frame sent
I0521 00:48:54.022827 8 log.go:172] (0xc002c8da20) Data frame received for 3
I0521 00:48:54.022843 8 log.go:172] (0xc0003eeaa0) (3) Data frame handling
I0521 00:48:54.022865 8 log.go:172] (0xc002c8da20) Data frame received for 5
I0521 00:48:54.022880 8 log.go:172] (0xc000bbeaa0) (5) Data frame handling
I0521 00:48:54.024246 8 log.go:172] (0xc002c8da20) Data frame received for 1
I0521 00:48:54.024261 8 log.go:172] (0xc0013a0dc0) (1) Data frame handling
I0521 00:48:54.024267 8 log.go:172] (0xc0013a0dc0) (1) Data frame sent
I0521 00:48:54.024275 8 log.go:172] (0xc002c8da20) (0xc0013a0dc0) Stream removed, broadcasting: 1
I0521 00:48:54.024361 8 log.go:172] (0xc002c8da20) (0xc0013a0dc0) Stream removed, broadcasting: 1
I0521 00:48:54.024374 8 log.go:172] (0xc002c8da20) (0xc0003eeaa0) Stream removed, broadcasting: 3
I0521 00:48:54.024529 8 log.go:172] (0xc002c8da20) (0xc000bbeaa0) Stream removed, broadcasting: 5
I0521 00:48:54.024611 8 log.go:172] (0xc002c8da20) Go away received
May 21 00:48:54.024: INFO: Exec stderr: ""
May 21 00:48:54.024: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4854 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 21 00:48:54.024: INFO: >>> kubeConfig: /root/.kube/config
I0521 00:48:54.060312 8 log.go:172] (0xc001d12160) (0xc0013a12c0) Create stream
I0521 00:48:54.060340 8 log.go:172] (0xc001d12160) (0xc0013a12c0) Stream added, broadcasting: 1
I0521 00:48:54.063269 8 log.go:172] (0xc001d12160) Reply frame received for 1
I0521 00:48:54.063312 8 log.go:172] (0xc001d12160) (0xc0013a1860) Create stream
I0521 00:48:54.063320 8 log.go:172] (0xc001d12160) (0xc0013a1860) Stream added, broadcasting: 3
I0521 00:48:54.064277 8 log.go:172] (0xc001d12160) Reply frame received for 3
I0521 00:48:54.064321 8 log.go:172] (0xc001d12160) (0xc0013a1e00) Create stream
I0521 00:48:54.064334 8 log.go:172] (0xc001d12160) (0xc0013a1e00) Stream added, broadcasting: 5
I0521 00:48:54.065664 8 log.go:172] (0xc001d12160) Reply frame received for 5
I0521 00:48:54.125940 8 log.go:172] (0xc001d12160) Data frame received for 3
I0521 00:48:54.125972 8 log.go:172] (0xc0013a1860) (3) Data frame handling
I0521 00:48:54.125991 8 log.go:172] (0xc0013a1860) (3) Data frame sent
I0521 00:48:54.126133 8 log.go:172] (0xc001d12160) Data frame received for 5
I0521 00:48:54.126166 8 log.go:172] (0xc0013a1e00) (5) Data frame handling
I0521 00:48:54.126188 8 log.go:172] (0xc001d12160) Data frame received for 3
I0521 00:48:54.126202 8 log.go:172] (0xc0013a1860) (3) Data frame handling
I0521 00:48:54.127167 8 log.go:172] (0xc001d12160) Data frame received for 1
I0521 00:48:54.127182 8 log.go:172] (0xc0013a12c0) (1) Data frame handling
I0521 00:48:54.127192 8 log.go:172] (0xc0013a12c0) (1) Data frame sent
I0521 00:48:54.127207 8 log.go:172] (0xc001d12160) (0xc0013a12c0) Stream removed, broadcasting: 1
I0521 00:48:54.127224 8 log.go:172] (0xc001d12160) Go away received
I0521 00:48:54.127330 8 log.go:172] (0xc001d12160) (0xc0013a12c0) Stream removed, broadcasting: 1
I0521 00:48:54.127346 8 log.go:172] (0xc001d12160) (0xc0013a1860) Stream removed, broadcasting: 3
I0521 00:48:54.127353 8 log.go:172] (0xc001d12160) (0xc0013a1e00) Stream removed, broadcasting: 5
May 21 00:48:54.127: INFO: Exec stderr: ""
May 21 00:48:54.127: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4854 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 21 00:48:54.127: INFO: >>> kubeConfig: /root/.kube/config
I0521 00:48:54.153883 8 log.go:172] (0xc00510c420) (0xc000bbf4a0) Create stream
I0521 00:48:54.153912 8 log.go:172] (0xc00510c420) (0xc000bbf4a0) Stream added, broadcasting: 1
I0521 00:48:54.156124 8 log.go:172] (0xc00510c420) Reply frame received for 1
I0521 00:48:54.156158 8 log.go:172] (0xc00510c420) (0xc0010466e0) Create stream
I0521 00:48:54.156170 8 log.go:172] (0xc00510c420) (0xc0010466e0) Stream added, broadcasting: 3
I0521 00:48:54.157315 8 log.go:172] (0xc00510c420) Reply frame received for 3
I0521 00:48:54.157360 8 log.go:172] (0xc00510c420) (0xc00124c5a0) Create stream
I0521 00:48:54.157374 8 log.go:172] (0xc00510c420) (0xc00124c5a0) Stream added, broadcasting: 5
I0521 00:48:54.158392 8 log.go:172] (0xc00510c420) Reply frame received for 5
I0521 00:48:54.213886 8 log.go:172] (0xc00510c420) Data frame received for 5
I0521 00:48:54.213924 8 log.go:172] (0xc00124c5a0) (5) Data frame handling
I0521 00:48:54.213955 8 log.go:172] (0xc00510c420) Data frame received for 3
I0521 00:48:54.213965 8 log.go:172] (0xc0010466e0) (3) Data frame handling
I0521 00:48:54.213979 8 log.go:172] (0xc0010466e0) (3) Data frame sent
I0521 00:48:54.213988 8 log.go:172] (0xc00510c420) Data frame received for 3
I0521 00:48:54.213993 8 log.go:172] (0xc0010466e0) (3) Data frame handling
I0521 00:48:54.215506 8 log.go:172] (0xc00510c420) Data frame received for 1
I0521 00:48:54.215519 8 log.go:172] (0xc000bbf4a0) (1) Data frame handling
I0521 00:48:54.215530 8 log.go:172] (0xc000bbf4a0) (1) Data frame sent
I0521 00:48:54.215544 8 log.go:172] (0xc00510c420) (0xc000bbf4a0) Stream removed, broadcasting: 1
I0521 00:48:54.215562 8 log.go:172] (0xc00510c420) Go away received
I0521 00:48:54.215616 8 log.go:172] (0xc00510c420) (0xc000bbf4a0) Stream removed, broadcasting: 1
I0521 00:48:54.215668 8 log.go:172] (0xc00510c420) (0xc0010466e0) Stream removed, broadcasting: 3
I0521 00:48:54.215686 8 log.go:172] (0xc00510c420) (0xc00124c5a0) Stream removed, broadcasting: 5
May 21 00:48:54.215: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 00:48:54.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-4854" for this suite.
• [SLOW TEST:13.250 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":220,"skipped":3756,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 00:48:54.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 00:48:54.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR
May 21 00:48:54.883: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-21T00:48:54Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-21T00:48:54Z]] name:name1 resourceVersion:6364613 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:cc59e182-b79e-4f1a-a5c0-1726e437ca61] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
May 21 00:49:04.888: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-21T00:49:04Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-21T00:49:04Z]] name:name2 resourceVersion:6364666 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:bdd38cfe-2139-4086-a8bf-a62fa35b7c5e] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
May 21 00:49:14.894: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-21T00:48:54Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-21T00:49:14Z]] name:name1 resourceVersion:6364696 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:cc59e182-b79e-4f1a-a5c0-1726e437ca61] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
May 21 00:49:24.900: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-21T00:49:04Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-21T00:49:24Z]] name:name2 resourceVersion:6364724 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:bdd38cfe-2139-4086-a8bf-a62fa35b7c5e] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
May 21 00:49:34.918: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-21T00:48:54Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-21T00:49:14Z]] name:name1 resourceVersion:6364758 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:cc59e182-b79e-4f1a-a5c0-1726e437ca61] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
May 21 00:49:44.927: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-21T00:49:04Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-21T00:49:24Z]] name:name2 resourceVersion:6364792 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:bdd38cfe-2139-4086-a8bf-a62fa35b7c5e] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 00:49:55.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-2568" for this suite.
• [SLOW TEST:61.231 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
CustomResourceDefinition Watch
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":288,"completed":221,"skipped":3757,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 00:49:55.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 21 00:49:55.526: INFO: Waiting up to 5m0s for pod "downwardapi-volume-060ceda5-abad-4fa2-9cb1-e79c7c337ef4" in namespace "projected-4315" to be "Succeeded or Failed"
May 21 00:49:55.530: INFO: Pod "downwardapi-volume-060ceda5-abad-4fa2-9cb1-e79c7c337ef4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.948159ms
May 21 00:49:57.534: INFO: Pod "downwardapi-volume-060ceda5-abad-4fa2-9cb1-e79c7c337ef4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007310116s
May 21 00:49:59.538: INFO: Pod "downwardapi-volume-060ceda5-abad-4fa2-9cb1-e79c7c337ef4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01210665s
STEP: Saw pod success
May 21 00:49:59.538: INFO: Pod "downwardapi-volume-060ceda5-abad-4fa2-9cb1-e79c7c337ef4" satisfied condition "Succeeded or Failed"
May 21 00:49:59.542: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-060ceda5-abad-4fa2-9cb1-e79c7c337ef4 container client-container:
STEP: delete the pod
May 21 00:49:59.586: INFO: Waiting for pod downwardapi-volume-060ceda5-abad-4fa2-9cb1-e79c7c337ef4 to disappear
May 21 00:49:59.596: INFO: Pod downwardapi-volume-060ceda5-abad-4fa2-9cb1-e79c7c337ef4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 00:49:59.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4315" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":222,"skipped":3760,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 00:49:59.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
May 21 00:49:59.649: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 00:50:06.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9550" for this suite.
• [SLOW TEST:6.519 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":288,"completed":223,"skipped":3796,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 00:50:06.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a service externalname-service with the type=ExternalName in namespace services-9180
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-9180
I0521 00:50:06.682323 8 runners.go:190] Created replication controller with name: externalname-service, namespace: services-9180, replica count: 2
I0521 00:50:09.732750 8 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0521 00:50:12.732999 8 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 21 00:50:12.733: INFO: Creating new exec pod
May 21 00:50:17.763: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9180 execpod5t8qb -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
May 21 00:50:20.960: INFO: stderr: "I0521 00:50:20.863817 3366 log.go:172] (0xc000942790) (0xc0006dce60) Create stream\nI0521 00:50:20.863844 3366 log.go:172] (0xc000942790) (0xc0006dce60) Stream added, broadcasting: 1\nI0521 00:50:20.866320 3366 log.go:172] (0xc000942790) Reply frame received for 1\nI0521 00:50:20.866357 3366 log.go:172] (0xc000942790) (0xc0006c8be0) Create stream\nI0521 00:50:20.866367 3366 log.go:172] (0xc000942790) (0xc0006c8be0) Stream added, broadcasting: 3\nI0521 00:50:20.867420 3366 log.go:172] (0xc000942790) Reply frame received for 3\nI0521 00:50:20.867446 3366 log.go:172] (0xc000942790) (0xc0006c9b80) Create stream\nI0521 00:50:20.867453 3366 log.go:172] (0xc000942790) (0xc0006c9b80) Stream added, broadcasting: 5\nI0521 00:50:20.868425 3366 log.go:172] (0xc000942790) Reply frame received for 5\nI0521 00:50:20.951085 3366 log.go:172] (0xc000942790) Data frame received for 5\nI0521 00:50:20.951133 3366 log.go:172] (0xc0006c9b80) (5) Data frame handling\nI0521 00:50:20.951168 3366 log.go:172] (0xc0006c9b80) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0521 00:50:20.951289 3366 log.go:172] (0xc000942790) Data frame received for 5\nI0521 00:50:20.951346 3366 log.go:172] (0xc0006c9b80) (5) Data frame handling\nI0521 00:50:20.951583 3366 log.go:172] (0xc000942790) Data frame received for 3\nI0521 00:50:20.951602 3366 log.go:172] (0xc0006c8be0) (3) Data frame handling\nI0521 00:50:20.953490 3366 log.go:172] (0xc000942790) Data frame received for 1\nI0521 00:50:20.953510 3366 log.go:172] (0xc0006dce60) (1) Data frame handling\nI0521 00:50:20.953524 3366 log.go:172] (0xc0006dce60) (1) Data frame sent\nI0521 00:50:20.953536 3366 log.go:172] (0xc000942790) (0xc0006dce60) Stream removed, broadcasting: 1\nI0521 00:50:20.953552 3366 log.go:172] (0xc000942790) Go away received\nI0521 00:50:20.954008 3366 log.go:172] (0xc000942790) (0xc0006dce60) Stream removed, broadcasting: 1\nI0521 00:50:20.954032 3366 log.go:172] (0xc000942790) (0xc0006c8be0) Stream removed, broadcasting: 3\nI0521 00:50:20.954046 3366 log.go:172] (0xc000942790) (0xc0006c9b80) Stream removed, broadcasting: 5\n"
May 21 00:50:20.960: INFO: stdout: ""
May 21 00:50:20.961: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-9180 execpod5t8qb -- /bin/sh -x -c nc -zv -t -w 2 10.105.10.152 80'
May 21 00:50:21.162: INFO: stderr: "I0521 00:50:21.085091 3399 log.go:172] (0xc00077e8f0) (0xc0005601e0) Create stream\nI0521 00:50:21.085328 3399 log.go:172] (0xc00077e8f0) (0xc0005601e0) Stream added, broadcasting: 1\nI0521 00:50:21.087693 3399 log.go:172] (0xc00077e8f0) Reply frame received for 1\nI0521 00:50:21.087719 3399 log.go:172] (0xc00077e8f0) (0xc00040cd20) Create stream\nI0521 00:50:21.087726 3399 log.go:172] (0xc00077e8f0) (0xc00040cd20) Stream added, broadcasting: 3\nI0521 00:50:21.088600 3399 log.go:172] (0xc00077e8f0) Reply frame received for 3\nI0521 00:50:21.088630 3399 log.go:172] (0xc00077e8f0) (0xc000561180) Create stream\nI0521 00:50:21.088643 3399 log.go:172] (0xc00077e8f0) (0xc000561180) Stream added, broadcasting: 5\nI0521 00:50:21.089706 3399 log.go:172] (0xc00077e8f0) Reply frame received for 5\nI0521 00:50:21.155377 3399 log.go:172] (0xc00077e8f0) Data frame received for 3\nI0521 00:50:21.155420 3399 log.go:172] (0xc00040cd20) (3) Data frame handling\nI0521 00:50:21.155444 3399 log.go:172] (0xc00077e8f0) Data frame received for 5\nI0521 00:50:21.155455 3399 log.go:172] (0xc000561180) (5) Data frame handling\nI0521 00:50:21.155467 3399 log.go:172] (0xc000561180) (5) Data frame sent\nI0521 00:50:21.155476 3399 log.go:172] (0xc00077e8f0) Data frame received for 5\nI0521 00:50:21.155485 3399 log.go:172] (0xc000561180) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.10.152 80\nConnection to 10.105.10.152 80 port [tcp/http] succeeded!\nI0521 00:50:21.156865 3399 log.go:172] (0xc00077e8f0) Data frame received for 1\nI0521 00:50:21.156891 3399 log.go:172] (0xc0005601e0) (1) Data frame handling\nI0521 00:50:21.156909 3399 log.go:172] (0xc0005601e0) (1) Data frame sent\nI0521 00:50:21.157015 3399 log.go:172] (0xc00077e8f0) (0xc0005601e0) Stream removed, broadcasting: 1\nI0521 00:50:21.157065 3399 log.go:172] (0xc00077e8f0) Go away received\nI0521 00:50:21.157714 3399 log.go:172] (0xc00077e8f0) (0xc0005601e0) Stream removed, broadcasting: 1\nI0521 00:50:21.157735 3399 log.go:172] (0xc00077e8f0) (0xc00040cd20) Stream removed, broadcasting: 3\nI0521 00:50:21.157746 3399 log.go:172] (0xc00077e8f0) (0xc000561180) Stream removed, broadcasting: 5\n"
May 21 00:50:21.163: INFO: stdout: ""
May 21 00:50:21.163: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 00:50:21.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9180" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:15.092 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should be able to change the type from ExternalName to ClusterIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":288,"completed":224,"skipped":3819,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 00:50:21.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-11277c23-520e-44af-a235-bebfd0095a16
STEP: Creating a pod to test consume configMaps
May 21 00:50:21.341: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cc08f196-0d07-4325-9bc7-ab7a4555eaac" in namespace "projected-487" to be "Succeeded or Failed"
May 21 00:50:21.346: INFO: Pod "pod-projected-configmaps-cc08f196-0d07-4325-9bc7-ab7a4555eaac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.797022ms
May 21 00:50:23.350: INFO: Pod "pod-projected-configmaps-cc08f196-0d07-4325-9bc7-ab7a4555eaac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00892377s
May 21 00:50:25.354: INFO: Pod "pod-projected-configmaps-cc08f196-0d07-4325-9bc7-ab7a4555eaac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012481575s
STEP: Saw pod success
May 21 00:50:25.354: INFO: Pod "pod-projected-configmaps-cc08f196-0d07-4325-9bc7-ab7a4555eaac" satisfied condition "Succeeded or Failed"
May 21 00:50:25.356: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-cc08f196-0d07-4325-9bc7-ab7a4555eaac container projected-configmap-volume-test:
STEP: delete the pod
May 21 00:50:25.432: INFO: Waiting for pod pod-projected-configmaps-cc08f196-0d07-4325-9bc7-ab7a4555eaac to disappear
May 21 00:50:25.442: INFO: Pod pod-projected-configmaps-cc08f196-0d07-4325-9bc7-ab7a4555eaac no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 00:50:25.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-487" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":225,"skipped":3832,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 00:50:25.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-9805d028-ac3d-4296-a76d-4834227722d6
STEP: Creating a pod to test consume configMaps
May 21 00:50:25.610: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-91ae950b-86ea-4486-8f29-9eeec3866642" in namespace "projected-7465" to be "Succeeded or Failed"
May 21 00:50:25.645: INFO: Pod "pod-projected-configmaps-91ae950b-86ea-4486-8f29-9eeec3866642": Phase="Pending", Reason="", readiness=false. Elapsed: 35.190361ms
May 21 00:50:27.652: INFO: Pod "pod-projected-configmaps-91ae950b-86ea-4486-8f29-9eeec3866642": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042075713s
May 21 00:50:29.664: INFO: Pod "pod-projected-configmaps-91ae950b-86ea-4486-8f29-9eeec3866642": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054137509s
May 21 00:50:31.669: INFO: Pod "pod-projected-configmaps-91ae950b-86ea-4486-8f29-9eeec3866642": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.058993898s
STEP: Saw pod success
May 21 00:50:31.669: INFO: Pod "pod-projected-configmaps-91ae950b-86ea-4486-8f29-9eeec3866642" satisfied condition "Succeeded or Failed"
May 21 00:50:31.672: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-91ae950b-86ea-4486-8f29-9eeec3866642 container projected-configmap-volume-test:
STEP: delete the pod
May 21 00:50:31.696: INFO: Waiting for pod pod-projected-configmaps-91ae950b-86ea-4486-8f29-9eeec3866642 to disappear
May 21 00:50:31.719: INFO: Pod pod-projected-configmaps-91ae950b-86ea-4486-8f29-9eeec3866642 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 00:50:31.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7465" for this suite.
• [SLOW TEST:6.278 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":226,"skipped":3849,"failed":0}
SSSSS
------------------------------
[sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 00:50:31.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
May 21 00:50:38.348: INFO: Successfully updated pod "adopt-release-jtprn"
STEP: Checking that the Job readopts the Pod
May 21 00:50:38.348: INFO: Waiting up to 15m0s for pod "adopt-release-jtprn" in namespace "job-2267" to be "adopted"
May 21 00:50:38.368: INFO: Pod "adopt-release-jtprn": Phase="Running", Reason="", readiness=true. Elapsed: 20.672369ms
May 21 00:50:40.373: INFO: Pod "adopt-release-jtprn": Phase="Running", Reason="", readiness=true. Elapsed: 2.024966684s
May 21 00:50:40.373: INFO: Pod "adopt-release-jtprn" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
May 21 00:50:40.882: INFO: Successfully updated pod "adopt-release-jtprn"
STEP: Checking that the Job releases the Pod
May 21 00:50:40.882: INFO: Waiting up to 15m0s for pod "adopt-release-jtprn" in namespace "job-2267" to be "released"
May 21 00:50:40.929: INFO: Pod "adopt-release-jtprn": Phase="Running", Reason="", readiness=true. Elapsed: 47.315721ms
May 21 00:50:42.933: INFO: Pod "adopt-release-jtprn": Phase="Running", Reason="", readiness=true. Elapsed: 2.051566912s
May 21 00:50:42.933: INFO: Pod "adopt-release-jtprn" satisfied condition "released"
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 00:50:42.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2267" for this suite.
• [SLOW TEST:11.215 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching orphans and release non-matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":288,"completed":227,"skipped":3854,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 00:50:42.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: starting the proxy server
May 21 00:50:43.327: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 00:50:43.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1683" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":288,"completed":228,"skipped":3881,"failed":0}
SSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 00:50:43.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-dd1285e2-b3fb-4fc0-9dff-7e9f5ab8d11d
STEP: Creating a pod to test
consume secrets May 21 00:50:43.756: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c3148b81-74fc-4bcd-914e-d5558fe08458" in namespace "projected-5719" to be "Succeeded or Failed" May 21 00:50:43.790: INFO: Pod "pod-projected-secrets-c3148b81-74fc-4bcd-914e-d5558fe08458": Phase="Pending", Reason="", readiness=false. Elapsed: 33.685774ms May 21 00:50:45.880: INFO: Pod "pod-projected-secrets-c3148b81-74fc-4bcd-914e-d5558fe08458": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123353206s May 21 00:50:47.885: INFO: Pod "pod-projected-secrets-c3148b81-74fc-4bcd-914e-d5558fe08458": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.129044544s STEP: Saw pod success May 21 00:50:47.885: INFO: Pod "pod-projected-secrets-c3148b81-74fc-4bcd-914e-d5558fe08458" satisfied condition "Succeeded or Failed" May 21 00:50:47.888: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-c3148b81-74fc-4bcd-914e-d5558fe08458 container projected-secret-volume-test: STEP: delete the pod May 21 00:50:48.057: INFO: Waiting for pod pod-projected-secrets-c3148b81-74fc-4bcd-914e-d5558fe08458 to disappear May 21 00:50:48.071: INFO: Pod pod-projected-secrets-c3148b81-74fc-4bcd-914e-d5558fe08458 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:50:48.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5719" for this suite. 
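The poll entries above report elapsed time in Go's duration notation (33.685774ms, 2.123353206s, and compound forms such as 5m0s). A small, hypothetical helper for converting those strings to seconds when post-processing a log like this one (the function name is mine, not part of the e2e framework):

```python
import re

# Unit multipliers for the Go-style durations that appear in this log.
_UNITS = {"h": 3600.0, "m": 60.0, "s": 1.0, "ms": 0.001}

def go_duration_to_seconds(text: str) -> float:
    """Parse durations like "33.685774ms", "4.129044544s", or "5m0s"."""
    parts = re.findall(r"(\d+(?:\.\d+)?)(ms|h|m|s)", text)
    if not parts:
        raise ValueError(f"unrecognized duration: {text!r}")
    return sum(float(value) * _UNITS[unit] for value, unit in parts)

print(go_duration_to_seconds("5m0s"))          # 300.0
print(go_duration_to_seconds("2.123353206s"))  # 2.123353206
```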
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":229,"skipped":3884,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:50:48.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:80 May 21 00:50:48.148: INFO: Waiting up to 1m0s for all nodes to be ready May 21 00:51:48.173: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:51:48.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:467 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. 
May 21 00:51:52.295: INFO: found a healthy node: latest-worker [It] runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 00:52:08.546: INFO: pods created so far: [1 1 1] May 21 00:52:08.546: INFO: length of pods created so far: 3 May 21 00:52:16.556: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:52:23.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-523" for this suite. [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:439 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:52:23.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-9300" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:74 • [SLOW TEST:95.590 seconds] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:428 runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":288,"completed":230,"skipped":3888,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:52:23.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 21 00:52:23.782: INFO: Waiting up to 5m0s 
for pod "downwardapi-volume-dd6e6f53-fe68-4c94-b56c-2af65a63a97f" in namespace "downward-api-1388" to be "Succeeded or Failed" May 21 00:52:23.821: INFO: Pod "downwardapi-volume-dd6e6f53-fe68-4c94-b56c-2af65a63a97f": Phase="Pending", Reason="", readiness=false. Elapsed: 39.460155ms May 21 00:52:25.824: INFO: Pod "downwardapi-volume-dd6e6f53-fe68-4c94-b56c-2af65a63a97f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042744376s May 21 00:52:27.830: INFO: Pod "downwardapi-volume-dd6e6f53-fe68-4c94-b56c-2af65a63a97f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04795887s STEP: Saw pod success May 21 00:52:27.830: INFO: Pod "downwardapi-volume-dd6e6f53-fe68-4c94-b56c-2af65a63a97f" satisfied condition "Succeeded or Failed" May 21 00:52:27.833: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-dd6e6f53-fe68-4c94-b56c-2af65a63a97f container client-container: STEP: delete the pod May 21 00:52:27.878: INFO: Waiting for pod downwardapi-volume-dd6e6f53-fe68-4c94-b56c-2af65a63a97f to disappear May 21 00:52:27.934: INFO: Pod downwardapi-volume-dd6e6f53-fe68-4c94-b56c-2af65a63a97f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:52:27.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1388" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":231,"skipped":3892,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:52:27.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 21 00:52:28.396: INFO: Waiting up to 5m0s for pod "pod-f478b548-8afe-4715-ad12-f83d626871e7" in namespace "emptydir-4124" to be "Succeeded or Failed" May 21 00:52:28.404: INFO: Pod "pod-f478b548-8afe-4715-ad12-f83d626871e7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.462986ms May 21 00:52:30.786: INFO: Pod "pod-f478b548-8afe-4715-ad12-f83d626871e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.390834888s May 21 00:52:32.799: INFO: Pod "pod-f478b548-8afe-4715-ad12-f83d626871e7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.403903017s STEP: Saw pod success May 21 00:52:32.800: INFO: Pod "pod-f478b548-8afe-4715-ad12-f83d626871e7" satisfied condition "Succeeded or Failed" May 21 00:52:32.803: INFO: Trying to get logs from node latest-worker2 pod pod-f478b548-8afe-4715-ad12-f83d626871e7 container test-container: STEP: delete the pod May 21 00:52:32.842: INFO: Waiting for pod pod-f478b548-8afe-4715-ad12-f83d626871e7 to disappear May 21 00:52:32.852: INFO: Pod pod-f478b548-8afe-4715-ad12-f83d626871e7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:52:32.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4124" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":232,"skipped":3903,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:52:32.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-cda1ab14-ca86-4fa5-bcf3-c50c3b13945c STEP: Creating a pod to test consume configMaps May 21 00:52:33.004: 
INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-673a7b1d-90f9-430f-bafc-bc54d3b3c0e1" in namespace "projected-4633" to be "Succeeded or Failed" May 21 00:52:33.163: INFO: Pod "pod-projected-configmaps-673a7b1d-90f9-430f-bafc-bc54d3b3c0e1": Phase="Pending", Reason="", readiness=false. Elapsed: 158.628057ms May 21 00:52:35.168: INFO: Pod "pod-projected-configmaps-673a7b1d-90f9-430f-bafc-bc54d3b3c0e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163044586s May 21 00:52:37.171: INFO: Pod "pod-projected-configmaps-673a7b1d-90f9-430f-bafc-bc54d3b3c0e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.166645472s STEP: Saw pod success May 21 00:52:37.171: INFO: Pod "pod-projected-configmaps-673a7b1d-90f9-430f-bafc-bc54d3b3c0e1" satisfied condition "Succeeded or Failed" May 21 00:52:37.174: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-673a7b1d-90f9-430f-bafc-bc54d3b3c0e1 container projected-configmap-volume-test: STEP: delete the pod May 21 00:52:37.216: INFO: Waiting for pod pod-projected-configmaps-673a7b1d-90f9-430f-bafc-bc54d3b3c0e1 to disappear May 21 00:52:37.219: INFO: Pod pod-projected-configmaps-673a7b1d-90f9-430f-bafc-bc54d3b3c0e1 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:52:37.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4633" for this suite. 
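The recurring pattern in these specs, "Waiting up to 5m0s for pod … to be 'Succeeded or Failed'" with the Phase re-checked every couple of seconds, is a plain poll-with-timeout loop. A generic sketch of that pattern (my own illustration, not the framework's actual code):

```python
import time

def wait_for(condition, timeout_s=300.0, interval_s=2.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `condition` until true or `timeout_s` elapses.

    Returns the elapsed time, mirroring the log's "Elapsed:" field.
    `clock` and `sleep` are injectable so the loop can be exercised
    without real waiting.
    """
    start = clock()
    while True:
        if condition():
            return clock() - start
        if clock() - start >= timeout_s:
            raise TimeoutError("condition not met within timeout")
        sleep(interval_s)

# Example with a fake clock: the pod goes Pending, Pending, Succeeded,
# so the loop returns after two 2-second sleeps.
phases = iter(["Pending", "Pending", "Succeeded"])
state = {"t": 0.0}
elapsed = wait_for(lambda: next(phases) == "Succeeded",
                   timeout_s=10.0, interval_s=2.0,
                   clock=lambda: state["t"],
                   sleep=lambda dt: state.__setitem__("t", state["t"] + dt))
print(elapsed)  # 4.0
```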
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":233,"skipped":3934,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:52:37.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 21 00:52:37.306: INFO: Waiting up to 5m0s for pod "pod-9f3f8495-1170-455c-9a6c-95f9bfa933ba" in namespace "emptydir-2546" to be "Succeeded or Failed" May 21 00:52:37.309: INFO: Pod "pod-9f3f8495-1170-455c-9a6c-95f9bfa933ba": Phase="Pending", Reason="", readiness=false. Elapsed: 3.257246ms May 21 00:52:39.313: INFO: Pod "pod-9f3f8495-1170-455c-9a6c-95f9bfa933ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007796369s May 21 00:52:41.318: INFO: Pod "pod-9f3f8495-1170-455c-9a6c-95f9bfa933ba": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012451463s STEP: Saw pod success May 21 00:52:41.318: INFO: Pod "pod-9f3f8495-1170-455c-9a6c-95f9bfa933ba" satisfied condition "Succeeded or Failed" May 21 00:52:41.322: INFO: Trying to get logs from node latest-worker2 pod pod-9f3f8495-1170-455c-9a6c-95f9bfa933ba container test-container: STEP: delete the pod May 21 00:52:41.354: INFO: Waiting for pod pod-9f3f8495-1170-455c-9a6c-95f9bfa933ba to disappear May 21 00:52:41.373: INFO: Pod pod-9f3f8495-1170-455c-9a6c-95f9bfa933ba no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:52:41.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2546" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":234,"skipped":3948,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:52:41.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the 
configmap STEP: Expecting to observe a delete notification for the watched object May 21 00:52:41.520: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-859 /api/v1/namespaces/watch-859/configmaps/e2e-watch-test-label-changed 46323d33-6363-46b4-895f-f9ca8b65b0cf 6365856 0 2020-05-21 00:52:41 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-21 00:52:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 21 00:52:41.520: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-859 /api/v1/namespaces/watch-859/configmaps/e2e-watch-test-label-changed 46323d33-6363-46b4-895f-f9ca8b65b0cf 6365857 0 2020-05-21 00:52:41 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-21 00:52:41 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 21 00:52:41.520: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-859 /api/v1/namespaces/watch-859/configmaps/e2e-watch-test-label-changed 46323d33-6363-46b4-895f-f9ca8b65b0cf 6365858 0 2020-05-21 00:52:41 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-21 00:52:41 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: 
Expecting to observe an add notification for the watched object when the label value was restored May 21 00:52:51.598: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-859 /api/v1/namespaces/watch-859/configmaps/e2e-watch-test-label-changed 46323d33-6363-46b4-895f-f9ca8b65b0cf 6365906 0 2020-05-21 00:52:41 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-21 00:52:51 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 21 00:52:51.598: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-859 /api/v1/namespaces/watch-859/configmaps/e2e-watch-test-label-changed 46323d33-6363-46b4-895f-f9ca8b65b0cf 6365907 0 2020-05-21 00:52:41 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-21 00:52:51 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} May 21 00:52:51.598: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-859 /api/v1/namespaces/watch-859/configmaps/e2e-watch-test-label-changed 46323d33-6363-46b4-895f-f9ca8b65b0cf 6365908 0 2020-05-21 00:52:41 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-21 00:52:51 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:52:51.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "watch-859" for this suite. • [SLOW TEST:10.230 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":288,"completed":235,"skipped":3977,"failed":0} [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:52:51.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 21 00:52:51.725: INFO: Waiting up to 5m0s for pod "downwardapi-volume-01fca67c-e87e-4225-91c1-c23c42db2fb3" in namespace "projected-1872" to be "Succeeded or Failed" May 21 00:52:51.728: INFO: Pod "downwardapi-volume-01fca67c-e87e-4225-91c1-c23c42db2fb3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.30103ms May 21 00:52:53.866: INFO: Pod "downwardapi-volume-01fca67c-e87e-4225-91c1-c23c42db2fb3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140976009s May 21 00:52:55.876: INFO: Pod "downwardapi-volume-01fca67c-e87e-4225-91c1-c23c42db2fb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.150786502s STEP: Saw pod success May 21 00:52:55.876: INFO: Pod "downwardapi-volume-01fca67c-e87e-4225-91c1-c23c42db2fb3" satisfied condition "Succeeded or Failed" May 21 00:52:55.878: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-01fca67c-e87e-4225-91c1-c23c42db2fb3 container client-container: STEP: delete the pod May 21 00:52:55.924: INFO: Waiting for pod downwardapi-volume-01fca67c-e87e-4225-91c1-c23c42db2fb3 to disappear May 21 00:52:55.946: INFO: Pod downwardapi-volume-01fca67c-e87e-4225-91c1-c23c42db2fb3 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:52:55.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1872" for this suite. 
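Each standalone "S" Ginkgo prints between specs marks one skipped spec, which is why a run such as the SSSS above is matched by a jump of 4 in the "skipped" counter (3884 to 3888 around the preemption spec). A hypothetical sketch of recovering those counts from raw log text:

```python
import re

# Every standalone run of "S" characters in Ginkgo output marks that
# many skipped specs; count them in a chunk of raw log text.  Runs
# embedded in words ("STEP", "SLOW") are excluded by the \b anchors.
def count_skipped_markers(chunk: str) -> int:
    return sum(len(run) for run in re.findall(r"\bS+\b", chunk))

print(count_skipped_markers("SSSS ------ SSS"))  # 7
```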
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":288,"completed":236,"skipped":3977,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:52:55.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 00:52:56.022: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 21 00:53:01.061: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 21 00:53:01.061: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 21 00:53:01.091: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-9624 /apis/apps/v1/namespaces/deployment-9624/deployments/test-cleanup-deployment e1b1ff0e-cc2f-4843-afe3-da22855fcd18 6365982 1 2020-05-21 00:53:01 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-05-21 00:53:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0031f2e58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 21 00:53:01.093: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. May 21 00:53:01.093: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 21 00:53:01.094: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-9624 /apis/apps/v1/namespaces/deployment-9624/replicasets/test-cleanup-controller 05ea64ba-50ca-411d-8ca6-f6d5e679ad24 6365983 1 2020-05-21 00:52:56 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment e1b1ff0e-cc2f-4843-afe3-da22855fcd18 0xc0031f31ef 0xc0031f3200}] [] [{e2e.test Update apps/v1 2020-05-21 00:52:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-21 00:53:01 +0000 UTC FieldsV1
{"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"e1b1ff0e-cc2f-4843-afe3-da22855fcd18\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0031f3298 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 21 00:53:01.114: INFO: Pod "test-cleanup-controller-dl75t" is available: &Pod{ObjectMeta:{test-cleanup-controller-dl75t test-cleanup-controller- deployment-9624 /api/v1/namespaces/deployment-9624/pods/test-cleanup-controller-dl75t 72b5e486-8399-43fb-9175-927a79449eed 6365969 0 2020-05-21 00:52:56 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 05ea64ba-50ca-411d-8ca6-f6d5e679ad24 0xc0031f3877 0xc0031f3878}] [] [{kube-controller-manager Update v1 2020-05-21 00:52:56 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"05ea64ba-50ca-411d-8ca6-f6d5e679ad24\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-21 00:52:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.235\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cprvn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cprvn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:Res
ourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cprvn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-21 00:52:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-21 00:52:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-21 00:52:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-21 00:52:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.235,StartTime:2020-05-21 00:52:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-21 00:52:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a6d364de9632a92ac177b110b377bcc0aff745a9452b8e8ff0d8cd9766007e20,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.235,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:53:01.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9624" for this suite. 
• [SLOW TEST:5.274 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":288,"completed":237,"skipped":3985,"failed":0} S ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:53:01.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 21 00:53:05.919: INFO: Successfully updated pod "pod-update-0908339f-5de7-4850-a016-21a1d06bc239" STEP: verifying the updated pod is in kubernetes May 21 00:53:05.947: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:53:05.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9675" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":288,"completed":238,"skipped":3986,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:53:05.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-7364 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7364 to expose endpoints map[] May 21 00:53:06.100: INFO: Get endpoints failed (16.09667ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 21 00:53:07.104: INFO: successfully validated that service endpoint-test2 in namespace services-7364 exposes endpoints map[] (1.019733086s elapsed) STEP: Creating pod pod1 in namespace services-7364 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7364 to expose endpoints map[pod1:[80]] May 21 00:53:11.330: INFO: successfully validated that service endpoint-test2 in namespace services-7364 exposes endpoints map[pod1:[80]] (4.217656077s elapsed) STEP: Creating pod pod2 in namespace services-7364 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7364 to expose endpoints map[pod1:[80] pod2:[80]] May 21 00:53:14.681: INFO: successfully 
validated that service endpoint-test2 in namespace services-7364 exposes endpoints map[pod1:[80] pod2:[80]] (3.346745785s elapsed) STEP: Deleting pod pod1 in namespace services-7364 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7364 to expose endpoints map[pod2:[80]] May 21 00:53:15.794: INFO: successfully validated that service endpoint-test2 in namespace services-7364 exposes endpoints map[pod2:[80]] (1.108250468s elapsed) STEP: Deleting pod pod2 in namespace services-7364 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7364 to expose endpoints map[] May 21 00:53:16.814: INFO: successfully validated that service endpoint-test2 in namespace services-7364 exposes endpoints map[] (1.01414429s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:53:16.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7364" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:10.956 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":288,"completed":239,"skipped":4001,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:53:16.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0521 00:53:29.571042 8 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 21 00:53:29.571: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:53:29.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3486" for this suite. 
• [SLOW TEST:12.667 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":288,"completed":240,"skipped":4033,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:53:29.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:53:33.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "kubelet-test-2250" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":288,"completed":241,"skipped":4076,"failed":0} SSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:53:33.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-0e0f33ca-320f-40b2-9d4f-c3dbd5e629a6 STEP: Creating a pod to test consume secrets May 21 00:53:34.171: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-26122f77-b2ae-4a16-84c4-17bce03d6f25" in namespace "projected-6888" to be "Succeeded or Failed" May 21 00:53:34.199: INFO: Pod "pod-projected-secrets-26122f77-b2ae-4a16-84c4-17bce03d6f25": Phase="Pending", Reason="", readiness=false. Elapsed: 28.312671ms May 21 00:53:36.246: INFO: Pod "pod-projected-secrets-26122f77-b2ae-4a16-84c4-17bce03d6f25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075000384s May 21 00:53:38.264: INFO: Pod "pod-projected-secrets-26122f77-b2ae-4a16-84c4-17bce03d6f25": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.09269715s May 21 00:53:40.268: INFO: Pod "pod-projected-secrets-26122f77-b2ae-4a16-84c4-17bce03d6f25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.097234062s STEP: Saw pod success May 21 00:53:40.268: INFO: Pod "pod-projected-secrets-26122f77-b2ae-4a16-84c4-17bce03d6f25" satisfied condition "Succeeded or Failed" May 21 00:53:40.272: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-26122f77-b2ae-4a16-84c4-17bce03d6f25 container secret-volume-test: STEP: delete the pod May 21 00:53:40.337: INFO: Waiting for pod pod-projected-secrets-26122f77-b2ae-4a16-84c4-17bce03d6f25 to disappear May 21 00:53:40.342: INFO: Pod pod-projected-secrets-26122f77-b2ae-4a16-84c4-17bce03d6f25 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:53:40.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6888" for this suite. 
• [SLOW TEST:6.545 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":242,"skipped":4080,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:53:40.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-067ee8d1-3d5f-4785-936b-05e47dd6bfc5 STEP: Creating a pod to test consume secrets May 21 00:53:40.451: INFO: Waiting up to 5m0s for pod "pod-secrets-6336bc24-2482-4fe0-bd63-307abf75c980" in namespace "secrets-8993" to be "Succeeded or Failed" May 21 00:53:40.498: INFO: Pod "pod-secrets-6336bc24-2482-4fe0-bd63-307abf75c980": Phase="Pending", Reason="", readiness=false. Elapsed: 47.115336ms May 21 00:53:42.502: INFO: Pod "pod-secrets-6336bc24-2482-4fe0-bd63-307abf75c980": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.051261449s May 21 00:53:44.507: INFO: Pod "pod-secrets-6336bc24-2482-4fe0-bd63-307abf75c980": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055702245s STEP: Saw pod success May 21 00:53:44.507: INFO: Pod "pod-secrets-6336bc24-2482-4fe0-bd63-307abf75c980" satisfied condition "Succeeded or Failed" May 21 00:53:44.510: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-6336bc24-2482-4fe0-bd63-307abf75c980 container secret-volume-test: STEP: delete the pod May 21 00:53:44.571: INFO: Waiting for pod pod-secrets-6336bc24-2482-4fe0-bd63-307abf75c980 to disappear May 21 00:53:44.579: INFO: Pod pod-secrets-6336bc24-2482-4fe0-bd63-307abf75c980 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:53:44.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8993" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":243,"skipped":4083,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:53:44.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 00:53:44.642: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 21 00:53:46.563: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5099 create -f -' May 21 00:53:49.788: INFO: stderr: "" May 21 00:53:49.788: INFO: stdout: "e2e-test-crd-publish-openapi-8683-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 21 00:53:49.788: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5099 delete e2e-test-crd-publish-openapi-8683-crds test-cr' May 21 00:53:49.891: INFO: stderr: "" May 21 00:53:49.891: INFO: stdout: "e2e-test-crd-publish-openapi-8683-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 21 00:53:49.892: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5099 apply -f -' May 21 00:53:50.185: INFO: stderr: "" May 21 00:53:50.185: INFO: stdout: "e2e-test-crd-publish-openapi-8683-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 21 00:53:50.185: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5099 delete e2e-test-crd-publish-openapi-8683-crds test-cr' May 21 00:53:50.299: INFO: stderr: "" May 21 00:53:50.299: INFO: stdout: "e2e-test-crd-publish-openapi-8683-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 21 00:53:50.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain 
e2e-test-crd-publish-openapi-8683-crds' May 21 00:53:50.536: INFO: stderr: "" May 21 00:53:50.536: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8683-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:53:53.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5099" for this suite. 
• [SLOW TEST:8.871 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":288,"completed":244,"skipped":4092,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:53:53.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 21 00:53:53.613: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 21 00:53:53.616: INFO: Number of nodes with available pods: 0 May 21 00:53:53.616: INFO: Node latest-worker is running more than one daemon pod May 21 00:53:54.621: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 21 00:53:54.625: INFO: Number of nodes with available pods: 0 May 21 00:53:54.625: INFO: Node latest-worker is running more than one daemon pod May 21 00:53:55.830: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 21 00:53:56.231: INFO: Number of nodes with available pods: 0 May 21 00:53:56.231: INFO: Node latest-worker is running more than one daemon pod May 21 00:53:56.620: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 21 00:53:56.623: INFO: Number of nodes with available pods: 0 May 21 00:53:56.623: INFO: Node latest-worker is running more than one daemon pod May 21 00:53:57.625: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 21 00:53:57.630: INFO: Number of nodes with available pods: 0 May 21 00:53:57.630: INFO: Node latest-worker is running more than one daemon pod May 21 00:53:58.620: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 21 00:53:58.624: INFO: Number of nodes with available pods: 2 May 21 00:53:58.624: 
INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 21 00:53:58.657: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 21 00:53:58.672: INFO: Number of nodes with available pods: 1 May 21 00:53:58.672: INFO: Node latest-worker2 is running more than one daemon pod May 21 00:53:59.676: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 21 00:53:59.702: INFO: Number of nodes with available pods: 1 May 21 00:53:59.702: INFO: Node latest-worker2 is running more than one daemon pod May 21 00:54:00.678: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 21 00:54:00.682: INFO: Number of nodes with available pods: 1 May 21 00:54:00.682: INFO: Node latest-worker2 is running more than one daemon pod May 21 00:54:01.678: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 21 00:54:01.682: INFO: Number of nodes with available pods: 1 May 21 00:54:01.682: INFO: Node latest-worker2 is running more than one daemon pod May 21 00:54:02.678: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 21 00:54:02.682: INFO: Number of nodes with available pods: 1 May 21 00:54:02.682: INFO: Node latest-worker2 is running more than one daemon pod May 21 00:54:03.678: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node May 21 00:54:03.682: INFO: Number of nodes with available pods: 1 May 21 00:54:03.682: INFO: Node latest-worker2 is running more than one daemon pod May 21 00:54:04.678: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 21 00:54:04.682: INFO: Number of nodes with available pods: 1 May 21 00:54:04.682: INFO: Node latest-worker2 is running more than one daemon pod May 21 00:54:05.679: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 21 00:54:05.691: INFO: Number of nodes with available pods: 1 May 21 00:54:05.691: INFO: Node latest-worker2 is running more than one daemon pod May 21 00:54:07.032: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 21 00:54:07.036: INFO: Number of nodes with available pods: 1 May 21 00:54:07.036: INFO: Node latest-worker2 is running more than one daemon pod May 21 00:54:07.677: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 21 00:54:07.681: INFO: Number of nodes with available pods: 1 May 21 00:54:07.681: INFO: Node latest-worker2 is running more than one daemon pod May 21 00:54:08.678: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 21 00:54:08.682: INFO: Number of nodes with available pods: 2 May 21 00:54:08.682: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1567, will wait for the garbage collector to delete the pods May 21 00:54:08.749: INFO: Deleting DaemonSet.extensions daemon-set took: 7.3782ms May 21 00:54:09.050: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.231042ms May 21 00:54:14.953: INFO: Number of nodes with available pods: 0 May 21 00:54:14.954: INFO: Number of running nodes: 0, number of available pods: 0 May 21 00:54:14.956: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1567/daemonsets","resourceVersion":"6366671"},"items":null} May 21 00:54:14.959: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1567/pods","resourceVersion":"6366671"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:54:14.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1567" for this suite. 
• [SLOW TEST:21.516 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":288,"completed":245,"skipped":4131,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:54:14.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 21 00:54:15.618: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 21 00:54:17.628: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619255, loc:(*time.Location)(0x7c342a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619255, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619255, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619255, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 21 00:54:20.699: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:54:20.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5831" for this suite. STEP: Destroying namespace "webhook-5831-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.965 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":288,"completed":246,"skipped":4141,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:54:20.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 00:56:21.069: INFO: Deleting pod "var-expansion-abb3a438-5fe9-4cec-8142-06b7cc0a3f10" in namespace "var-expansion-1440" May 21 00:56:21.074: INFO: Wait up to 5m0s for pod "var-expansion-abb3a438-5fe9-4cec-8142-06b7cc0a3f10" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 
21 00:56:25.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1440" for this suite. • [SLOW TEST:124.163 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":288,"completed":247,"skipped":4152,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:56:25.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-7148 STEP: creating a selector STEP: Creating the service pods in kubernetes May 21 00:56:25.172: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 21 00:56:25.326: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 21 00:56:27.331: INFO: The status of Pod netserver-0 is Pending, waiting for it to be 
Running (with Ready = true) May 21 00:56:29.332: INFO: The status of Pod netserver-0 is Running (Ready = false) May 21 00:56:31.331: INFO: The status of Pod netserver-0 is Running (Ready = false) May 21 00:56:33.331: INFO: The status of Pod netserver-0 is Running (Ready = false) May 21 00:56:35.331: INFO: The status of Pod netserver-0 is Running (Ready = false) May 21 00:56:37.331: INFO: The status of Pod netserver-0 is Running (Ready = false) May 21 00:56:39.331: INFO: The status of Pod netserver-0 is Running (Ready = false) May 21 00:56:41.331: INFO: The status of Pod netserver-0 is Running (Ready = true) May 21 00:56:41.338: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 21 00:56:45.366: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.247:8080/dial?request=hostname&protocol=udp&host=10.244.1.241&port=8081&tries=1'] Namespace:pod-network-test-7148 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 21 00:56:45.366: INFO: >>> kubeConfig: /root/.kube/config I0521 00:56:45.400161 8 log.go:172] (0xc0023704d0) (0xc000c4c0a0) Create stream I0521 00:56:45.400192 8 log.go:172] (0xc0023704d0) (0xc000c4c0a0) Stream added, broadcasting: 1 I0521 00:56:45.402064 8 log.go:172] (0xc0023704d0) Reply frame received for 1 I0521 00:56:45.402096 8 log.go:172] (0xc0023704d0) (0xc00124de00) Create stream I0521 00:56:45.402103 8 log.go:172] (0xc0023704d0) (0xc00124de00) Stream added, broadcasting: 3 I0521 00:56:45.402948 8 log.go:172] (0xc0023704d0) Reply frame received for 3 I0521 00:56:45.403004 8 log.go:172] (0xc0023704d0) (0xc0013a03c0) Create stream I0521 00:56:45.403038 8 log.go:172] (0xc0023704d0) (0xc0013a03c0) Stream added, broadcasting: 5 I0521 00:56:45.403849 8 log.go:172] (0xc0023704d0) Reply frame received for 5 I0521 00:56:45.479049 8 log.go:172] (0xc0023704d0) Data frame received for 3 I0521 00:56:45.479079 8 log.go:172] 
(0xc00124de00) (3) Data frame handling I0521 00:56:45.479094 8 log.go:172] (0xc00124de00) (3) Data frame sent I0521 00:56:45.479567 8 log.go:172] (0xc0023704d0) Data frame received for 5 I0521 00:56:45.479595 8 log.go:172] (0xc0013a03c0) (5) Data frame handling I0521 00:56:45.479954 8 log.go:172] (0xc0023704d0) Data frame received for 3 I0521 00:56:45.480006 8 log.go:172] (0xc00124de00) (3) Data frame handling I0521 00:56:45.482930 8 log.go:172] (0xc0023704d0) Data frame received for 1 I0521 00:56:45.482967 8 log.go:172] (0xc000c4c0a0) (1) Data frame handling I0521 00:56:45.483001 8 log.go:172] (0xc000c4c0a0) (1) Data frame sent I0521 00:56:45.483034 8 log.go:172] (0xc0023704d0) (0xc000c4c0a0) Stream removed, broadcasting: 1 I0521 00:56:45.483060 8 log.go:172] (0xc0023704d0) Go away received I0521 00:56:45.483219 8 log.go:172] (0xc0023704d0) (0xc000c4c0a0) Stream removed, broadcasting: 1 I0521 00:56:45.483241 8 log.go:172] (0xc0023704d0) (0xc00124de00) Stream removed, broadcasting: 3 I0521 00:56:45.483250 8 log.go:172] (0xc0023704d0) (0xc0013a03c0) Stream removed, broadcasting: 5 May 21 00:56:45.483: INFO: Waiting for responses: map[] May 21 00:56:45.486: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.247:8080/dial?request=hostname&protocol=udp&host=10.244.2.246&port=8081&tries=1'] Namespace:pod-network-test-7148 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 21 00:56:45.486: INFO: >>> kubeConfig: /root/.kube/config I0521 00:56:45.515100 8 log.go:172] (0xc002370a50) (0xc000c4cc80) Create stream I0521 00:56:45.515124 8 log.go:172] (0xc002370a50) (0xc000c4cc80) Stream added, broadcasting: 1 I0521 00:56:45.517767 8 log.go:172] (0xc002370a50) Reply frame received for 1 I0521 00:56:45.517799 8 log.go:172] (0xc002370a50) (0xc0013a0960) Create stream I0521 00:56:45.517810 8 log.go:172] (0xc002370a50) (0xc0013a0960) Stream added, broadcasting: 3 I0521 00:56:45.518865 8 
log.go:172] (0xc002370a50) Reply frame received for 3 I0521 00:56:45.518910 8 log.go:172] (0xc002370a50) (0xc0013a0c80) Create stream I0521 00:56:45.518933 8 log.go:172] (0xc002370a50) (0xc0013a0c80) Stream added, broadcasting: 5 I0521 00:56:45.520142 8 log.go:172] (0xc002370a50) Reply frame received for 5 I0521 00:56:45.588692 8 log.go:172] (0xc002370a50) Data frame received for 3 I0521 00:56:45.588730 8 log.go:172] (0xc0013a0960) (3) Data frame handling I0521 00:56:45.588755 8 log.go:172] (0xc0013a0960) (3) Data frame sent I0521 00:56:45.589007 8 log.go:172] (0xc002370a50) Data frame received for 3 I0521 00:56:45.589041 8 log.go:172] (0xc0013a0960) (3) Data frame handling I0521 00:56:45.589072 8 log.go:172] (0xc002370a50) Data frame received for 5 I0521 00:56:45.589093 8 log.go:172] (0xc0013a0c80) (5) Data frame handling I0521 00:56:45.591339 8 log.go:172] (0xc002370a50) Data frame received for 1 I0521 00:56:45.591358 8 log.go:172] (0xc000c4cc80) (1) Data frame handling I0521 00:56:45.591377 8 log.go:172] (0xc000c4cc80) (1) Data frame sent I0521 00:56:45.591471 8 log.go:172] (0xc002370a50) (0xc000c4cc80) Stream removed, broadcasting: 1 I0521 00:56:45.591584 8 log.go:172] (0xc002370a50) (0xc000c4cc80) Stream removed, broadcasting: 1 I0521 00:56:45.591611 8 log.go:172] (0xc002370a50) (0xc0013a0960) Stream removed, broadcasting: 3 I0521 00:56:45.591708 8 log.go:172] (0xc002370a50) Go away received I0521 00:56:45.591829 8 log.go:172] (0xc002370a50) (0xc0013a0c80) Stream removed, broadcasting: 5 May 21 00:56:45.591: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:56:45.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7148" for this suite. 
• [SLOW TEST:20.496 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":288,"completed":248,"skipped":4164,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:56:45.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3322 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-3322 STEP: Creating statefulset with conflicting port in 
namespace statefulset-3322 STEP: Waiting until pod test-pod will start running in namespace statefulset-3322 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3322 May 21 00:56:49.865: INFO: Observed stateful pod in namespace: statefulset-3322, name: ss-0, uid: 306f626e-23a0-4e59-bc8f-405bd9b13e3d, status phase: Pending. Waiting for statefulset controller to delete. May 21 00:56:50.336: INFO: Observed stateful pod in namespace: statefulset-3322, name: ss-0, uid: 306f626e-23a0-4e59-bc8f-405bd9b13e3d, status phase: Failed. Waiting for statefulset controller to delete. May 21 00:56:50.343: INFO: Observed stateful pod in namespace: statefulset-3322, name: ss-0, uid: 306f626e-23a0-4e59-bc8f-405bd9b13e3d, status phase: Failed. Waiting for statefulset controller to delete. May 21 00:56:50.360: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3322 STEP: Removing pod with conflicting port in namespace statefulset-3322 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3322 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 21 00:56:56.484: INFO: Deleting all statefulset in ns statefulset-3322 May 21 00:56:56.486: INFO: Scaling statefulset ss to 0 May 21 00:57:06.502: INFO: Waiting for statefulset status.replicas updated to 0 May 21 00:57:06.504: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:57:06.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3322" for this suite. 
• [SLOW TEST:21.021 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":288,"completed":249,"skipped":4166,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:57:06.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 21 00:57:07.083: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 21 00:57:09.139: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619427, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619427, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619427, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619427, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 21 00:57:11.144: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619427, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619427, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619427, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619427, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 21 00:57:14.174: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults 
after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:57:14.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-733" for this suite. STEP: Destroying namespace "webhook-733-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.907 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":288,"completed":250,"skipped":4169,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:57:14.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:57:14.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6857" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":288,"completed":251,"skipped":4185,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:57:14.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4341.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4341.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 21 00:57:21.436: INFO: DNS probes using dns-4341/dns-test-73e24496-00e8-414e-9303-16888bb7a6c5 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 00:57:21.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4341" for this suite. 
• [SLOW TEST:6.866 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":288,"completed":252,"skipped":4188,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 00:57:21.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-1494 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-1494 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1494 May 21 00:57:21.883: INFO: Found 0 stateful pods, waiting for 1 May 21 00:57:31.887: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running 
- Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 21 00:57:31.890: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 21 00:57:32.191: INFO: stderr: "I0521 00:57:32.033350 3550 log.go:172] (0xc000b51340) (0xc000758fa0) Create stream\nI0521 00:57:32.033418 3550 log.go:172] (0xc000b51340) (0xc000758fa0) Stream added, broadcasting: 1\nI0521 00:57:32.038536 3550 log.go:172] (0xc000b51340) Reply frame received for 1\nI0521 00:57:32.038567 3550 log.go:172] (0xc000b51340) (0xc000713c20) Create stream\nI0521 00:57:32.038575 3550 log.go:172] (0xc000b51340) (0xc000713c20) Stream added, broadcasting: 3\nI0521 00:57:32.039564 3550 log.go:172] (0xc000b51340) Reply frame received for 3\nI0521 00:57:32.039598 3550 log.go:172] (0xc000b51340) (0xc0006e4d20) Create stream\nI0521 00:57:32.039608 3550 log.go:172] (0xc000b51340) (0xc0006e4d20) Stream added, broadcasting: 5\nI0521 00:57:32.040422 3550 log.go:172] (0xc000b51340) Reply frame received for 5\nI0521 00:57:32.122349 3550 log.go:172] (0xc000b51340) Data frame received for 5\nI0521 00:57:32.122378 3550 log.go:172] (0xc0006e4d20) (5) Data frame handling\nI0521 00:57:32.122401 3550 log.go:172] (0xc0006e4d20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0521 00:57:32.183504 3550 log.go:172] (0xc000b51340) Data frame received for 3\nI0521 00:57:32.183546 3550 log.go:172] (0xc000713c20) (3) Data frame handling\nI0521 00:57:32.183561 3550 log.go:172] (0xc000713c20) (3) Data frame sent\nI0521 00:57:32.183572 3550 log.go:172] (0xc000b51340) Data frame received for 3\nI0521 00:57:32.183581 3550 log.go:172] (0xc000713c20) (3) Data frame handling\nI0521 00:57:32.183652 3550 log.go:172] (0xc000b51340) Data frame received for 5\nI0521 00:57:32.183693 3550 log.go:172] (0xc0006e4d20) (5) Data 
frame handling\nI0521 00:57:32.185521 3550 log.go:172] (0xc000b51340) Data frame received for 1\nI0521 00:57:32.185554 3550 log.go:172] (0xc000758fa0) (1) Data frame handling\nI0521 00:57:32.185580 3550 log.go:172] (0xc000758fa0) (1) Data frame sent\nI0521 00:57:32.185599 3550 log.go:172] (0xc000b51340) (0xc000758fa0) Stream removed, broadcasting: 1\nI0521 00:57:32.185614 3550 log.go:172] (0xc000b51340) Go away received\nI0521 00:57:32.186101 3550 log.go:172] (0xc000b51340) (0xc000758fa0) Stream removed, broadcasting: 1\nI0521 00:57:32.186123 3550 log.go:172] (0xc000b51340) (0xc000713c20) Stream removed, broadcasting: 3\nI0521 00:57:32.186139 3550 log.go:172] (0xc000b51340) (0xc0006e4d20) Stream removed, broadcasting: 5\n" May 21 00:57:32.191: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 21 00:57:32.191: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 21 00:57:32.213: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 21 00:57:42.218: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 21 00:57:42.218: INFO: Waiting for statefulset status.replicas updated to 0 May 21 00:57:42.233: INFO: POD NODE PHASE GRACE CONDITIONS May 21 00:57:42.233: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:32 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:21 +0000 UTC }] May 21 00:57:42.233: INFO: May 21 00:57:42.233: INFO: StatefulSet ss has not reached scale 3, at 1 May 21 00:57:43.238: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 8.9970807s May 21 00:57:44.242: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.992250155s May 21 00:57:45.309: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.987424484s May 21 00:57:46.376: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.920643355s May 21 00:57:47.380: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.853820617s May 21 00:57:48.385: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.849722313s May 21 00:57:49.398: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.844857576s May 21 00:57:50.403: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.831534201s May 21 00:57:51.409: INFO: Verifying statefulset ss doesn't scale past 3 for another 826.520284ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1494 May 21 00:57:52.414: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 00:57:52.661: INFO: stderr: "I0521 00:57:52.560422 3571 log.go:172] (0xc00003b810) (0xc000a4e6e0) Create stream\nI0521 00:57:52.560471 3571 log.go:172] (0xc00003b810) (0xc000a4e6e0) Stream added, broadcasting: 1\nI0521 00:57:52.563930 3571 log.go:172] (0xc00003b810) Reply frame received for 1\nI0521 00:57:52.563978 3571 log.go:172] (0xc00003b810) (0xc00084ad20) Create stream\nI0521 00:57:52.563995 3571 log.go:172] (0xc00003b810) (0xc00084ad20) Stream added, broadcasting: 3\nI0521 00:57:52.564788 3571 log.go:172] (0xc00003b810) Reply frame received for 3\nI0521 00:57:52.564818 3571 log.go:172] (0xc00003b810) (0xc0008445a0) Create stream\nI0521 00:57:52.564830 3571 log.go:172] (0xc00003b810) (0xc0008445a0) Stream added, broadcasting: 5\nI0521 00:57:52.565883 3571 log.go:172] 
(0xc00003b810) Reply frame received for 5\nI0521 00:57:52.655645 3571 log.go:172] (0xc00003b810) Data frame received for 3\nI0521 00:57:52.655689 3571 log.go:172] (0xc00084ad20) (3) Data frame handling\nI0521 00:57:52.655704 3571 log.go:172] (0xc00084ad20) (3) Data frame sent\nI0521 00:57:52.655712 3571 log.go:172] (0xc00003b810) Data frame received for 3\nI0521 00:57:52.655717 3571 log.go:172] (0xc00084ad20) (3) Data frame handling\nI0521 00:57:52.655739 3571 log.go:172] (0xc00003b810) Data frame received for 5\nI0521 00:57:52.655745 3571 log.go:172] (0xc0008445a0) (5) Data frame handling\nI0521 00:57:52.655750 3571 log.go:172] (0xc0008445a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0521 00:57:52.655883 3571 log.go:172] (0xc00003b810) Data frame received for 5\nI0521 00:57:52.655916 3571 log.go:172] (0xc0008445a0) (5) Data frame handling\nI0521 00:57:52.657240 3571 log.go:172] (0xc00003b810) Data frame received for 1\nI0521 00:57:52.657283 3571 log.go:172] (0xc000a4e6e0) (1) Data frame handling\nI0521 00:57:52.657290 3571 log.go:172] (0xc000a4e6e0) (1) Data frame sent\nI0521 00:57:52.657298 3571 log.go:172] (0xc00003b810) (0xc000a4e6e0) Stream removed, broadcasting: 1\nI0521 00:57:52.657552 3571 log.go:172] (0xc00003b810) (0xc000a4e6e0) Stream removed, broadcasting: 1\nI0521 00:57:52.657572 3571 log.go:172] (0xc00003b810) (0xc00084ad20) Stream removed, broadcasting: 3\nI0521 00:57:52.657582 3571 log.go:172] (0xc00003b810) (0xc0008445a0) Stream removed, broadcasting: 5\n" May 21 00:57:52.661: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 21 00:57:52.661: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 21 00:57:52.661: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' May 21 00:57:52.882: INFO: stderr: "I0521 00:57:52.801871 3591 log.go:172] (0xc00097d130) (0xc000a86460) Create stream\nI0521 00:57:52.801927 3591 log.go:172] (0xc00097d130) (0xc000a86460) Stream added, broadcasting: 1\nI0521 00:57:52.804186 3591 log.go:172] (0xc00097d130) Reply frame received for 1\nI0521 00:57:52.804284 3591 log.go:172] (0xc00097d130) (0xc0005592c0) Create stream\nI0521 00:57:52.804308 3591 log.go:172] (0xc00097d130) (0xc0005592c0) Stream added, broadcasting: 3\nI0521 00:57:52.805070 3591 log.go:172] (0xc00097d130) Reply frame received for 3\nI0521 00:57:52.805092 3591 log.go:172] (0xc00097d130) (0xc000a86500) Create stream\nI0521 00:57:52.805099 3591 log.go:172] (0xc00097d130) (0xc000a86500) Stream added, broadcasting: 5\nI0521 00:57:52.805955 3591 log.go:172] (0xc00097d130) Reply frame received for 5\nI0521 00:57:52.874206 3591 log.go:172] (0xc00097d130) Data frame received for 5\nI0521 00:57:52.874247 3591 log.go:172] (0xc000a86500) (5) Data frame handling\nI0521 00:57:52.874287 3591 log.go:172] (0xc000a86500) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0521 00:57:52.874330 3591 log.go:172] (0xc00097d130) Data frame received for 5\nI0521 00:57:52.874367 3591 log.go:172] (0xc000a86500) (5) Data frame handling\nI0521 00:57:52.874577 3591 log.go:172] (0xc00097d130) Data frame received for 3\nI0521 00:57:52.874597 3591 log.go:172] (0xc0005592c0) (3) Data frame handling\nI0521 00:57:52.874615 3591 log.go:172] (0xc0005592c0) (3) Data frame sent\nI0521 00:57:52.874943 3591 log.go:172] (0xc00097d130) Data frame received for 3\nI0521 00:57:52.874982 3591 log.go:172] (0xc0005592c0) (3) Data frame handling\nI0521 00:57:52.877072 3591 log.go:172] (0xc00097d130) Data frame received for 1\nI0521 00:57:52.877089 3591 log.go:172] (0xc000a86460) (1) Data frame handling\nI0521 00:57:52.877227 3591 log.go:172] (0xc000a86460) 
(1) Data frame sent\nI0521 00:57:52.877264 3591 log.go:172] (0xc00097d130) (0xc000a86460) Stream removed, broadcasting: 1\nI0521 00:57:52.877552 3591 log.go:172] (0xc00097d130) Go away received\nI0521 00:57:52.877621 3591 log.go:172] (0xc00097d130) (0xc000a86460) Stream removed, broadcasting: 1\nI0521 00:57:52.877634 3591 log.go:172] (0xc00097d130) (0xc0005592c0) Stream removed, broadcasting: 3\nI0521 00:57:52.877641 3591 log.go:172] (0xc00097d130) (0xc000a86500) Stream removed, broadcasting: 5\n" May 21 00:57:52.882: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 21 00:57:52.882: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 21 00:57:52.882: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 00:57:53.096: INFO: stderr: "I0521 00:57:53.023149 3611 log.go:172] (0xc00003a0b0) (0xc000812460) Create stream\nI0521 00:57:53.023211 3611 log.go:172] (0xc00003a0b0) (0xc000812460) Stream added, broadcasting: 1\nI0521 00:57:53.025709 3611 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0521 00:57:53.025772 3611 log.go:172] (0xc00003a0b0) (0xc00062e140) Create stream\nI0521 00:57:53.025792 3611 log.go:172] (0xc00003a0b0) (0xc00062e140) Stream added, broadcasting: 3\nI0521 00:57:53.026961 3611 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0521 00:57:53.027022 3611 log.go:172] (0xc00003a0b0) (0xc00055cc80) Create stream\nI0521 00:57:53.027045 3611 log.go:172] (0xc00003a0b0) (0xc00055cc80) Stream added, broadcasting: 5\nI0521 00:57:53.028223 3611 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0521 00:57:53.088081 3611 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0521 00:57:53.088123 3611 log.go:172] (0xc00003a0b0) Data frame received for 
5\nI0521 00:57:53.088142 3611 log.go:172] (0xc00055cc80) (5) Data frame handling\nI0521 00:57:53.088156 3611 log.go:172] (0xc00055cc80) (5) Data frame sent\nI0521 00:57:53.088170 3611 log.go:172] (0xc00003a0b0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0521 00:57:53.088192 3611 log.go:172] (0xc00062e140) (3) Data frame handling\nI0521 00:57:53.088236 3611 log.go:172] (0xc00062e140) (3) Data frame sent\nI0521 00:57:53.088248 3611 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0521 00:57:53.088261 3611 log.go:172] (0xc00062e140) (3) Data frame handling\nI0521 00:57:53.088290 3611 log.go:172] (0xc00055cc80) (5) Data frame handling\nI0521 00:57:53.090186 3611 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0521 00:57:53.090212 3611 log.go:172] (0xc000812460) (1) Data frame handling\nI0521 00:57:53.090267 3611 log.go:172] (0xc000812460) (1) Data frame sent\nI0521 00:57:53.090337 3611 log.go:172] (0xc00003a0b0) (0xc000812460) Stream removed, broadcasting: 1\nI0521 00:57:53.090414 3611 log.go:172] (0xc00003a0b0) Go away received\nI0521 00:57:53.090665 3611 log.go:172] (0xc00003a0b0) (0xc000812460) Stream removed, broadcasting: 1\nI0521 00:57:53.090686 3611 log.go:172] (0xc00003a0b0) (0xc00062e140) Stream removed, broadcasting: 3\nI0521 00:57:53.090702 3611 log.go:172] (0xc00003a0b0) (0xc00055cc80) Stream removed, broadcasting: 5\n" May 21 00:57:53.096: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 21 00:57:53.096: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 21 00:57:53.099: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 21 00:58:03.105: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 21 00:58:03.105: INFO: Waiting for pod 
ss-1 to enter Running - Ready=true, currently Running - Ready=true May 21 00:58:03.105: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 21 00:58:03.109: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 21 00:58:03.332: INFO: stderr: "I0521 00:58:03.249488 3632 log.go:172] (0xc000a2d4a0) (0xc000678780) Create stream\nI0521 00:58:03.249554 3632 log.go:172] (0xc000a2d4a0) (0xc000678780) Stream added, broadcasting: 1\nI0521 00:58:03.253958 3632 log.go:172] (0xc000a2d4a0) Reply frame received for 1\nI0521 00:58:03.253995 3632 log.go:172] (0xc000a2d4a0) (0xc00026a320) Create stream\nI0521 00:58:03.254025 3632 log.go:172] (0xc000a2d4a0) (0xc00026a320) Stream added, broadcasting: 3\nI0521 00:58:03.254899 3632 log.go:172] (0xc000a2d4a0) Reply frame received for 3\nI0521 00:58:03.254932 3632 log.go:172] (0xc000a2d4a0) (0xc00064f680) Create stream\nI0521 00:58:03.254958 3632 log.go:172] (0xc000a2d4a0) (0xc00064f680) Stream added, broadcasting: 5\nI0521 00:58:03.255656 3632 log.go:172] (0xc000a2d4a0) Reply frame received for 5\nI0521 00:58:03.326698 3632 log.go:172] (0xc000a2d4a0) Data frame received for 5\nI0521 00:58:03.326743 3632 log.go:172] (0xc00064f680) (5) Data frame handling\nI0521 00:58:03.326758 3632 log.go:172] (0xc00064f680) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0521 00:58:03.326769 3632 log.go:172] (0xc000a2d4a0) Data frame received for 5\nI0521 00:58:03.326808 3632 log.go:172] (0xc000a2d4a0) Data frame received for 3\nI0521 00:58:03.326838 3632 log.go:172] (0xc00026a320) (3) Data frame handling\nI0521 00:58:03.326866 3632 log.go:172] (0xc00026a320) (3) Data frame sent\nI0521 00:58:03.326886 3632 log.go:172] (0xc000a2d4a0) Data frame received for 
3\nI0521 00:58:03.326899 3632 log.go:172] (0xc00026a320) (3) Data frame handling\nI0521 00:58:03.326926 3632 log.go:172] (0xc00064f680) (5) Data frame handling\nI0521 00:58:03.328060 3632 log.go:172] (0xc000a2d4a0) Data frame received for 1\nI0521 00:58:03.328079 3632 log.go:172] (0xc000678780) (1) Data frame handling\nI0521 00:58:03.328091 3632 log.go:172] (0xc000678780) (1) Data frame sent\nI0521 00:58:03.328105 3632 log.go:172] (0xc000a2d4a0) (0xc000678780) Stream removed, broadcasting: 1\nI0521 00:58:03.328143 3632 log.go:172] (0xc000a2d4a0) Go away received\nI0521 00:58:03.328431 3632 log.go:172] (0xc000a2d4a0) (0xc000678780) Stream removed, broadcasting: 1\nI0521 00:58:03.328445 3632 log.go:172] (0xc000a2d4a0) (0xc00026a320) Stream removed, broadcasting: 3\nI0521 00:58:03.328452 3632 log.go:172] (0xc000a2d4a0) (0xc00064f680) Stream removed, broadcasting: 5\n" May 21 00:58:03.332: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 21 00:58:03.332: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 21 00:58:03.332: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 21 00:58:03.573: INFO: stderr: "I0521 00:58:03.465002 3652 log.go:172] (0xc000c52c60) (0xc000269360) Create stream\nI0521 00:58:03.465054 3652 log.go:172] (0xc000c52c60) (0xc000269360) Stream added, broadcasting: 1\nI0521 00:58:03.467428 3652 log.go:172] (0xc000c52c60) Reply frame received for 1\nI0521 00:58:03.467474 3652 log.go:172] (0xc000c52c60) (0xc00030e6e0) Create stream\nI0521 00:58:03.467488 3652 log.go:172] (0xc000c52c60) (0xc00030e6e0) Stream added, broadcasting: 3\nI0521 00:58:03.468197 3652 log.go:172] (0xc000c52c60) Reply frame received for 3\nI0521 00:58:03.468232 3652 log.go:172] 
(0xc000c52c60) (0xc00030ee60) Create stream\nI0521 00:58:03.468248 3652 log.go:172] (0xc000c52c60) (0xc00030ee60) Stream added, broadcasting: 5\nI0521 00:58:03.468900 3652 log.go:172] (0xc000c52c60) Reply frame received for 5\nI0521 00:58:03.538841 3652 log.go:172] (0xc000c52c60) Data frame received for 5\nI0521 00:58:03.538867 3652 log.go:172] (0xc00030ee60) (5) Data frame handling\nI0521 00:58:03.538886 3652 log.go:172] (0xc00030ee60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0521 00:58:03.567182 3652 log.go:172] (0xc000c52c60) Data frame received for 3\nI0521 00:58:03.567327 3652 log.go:172] (0xc00030e6e0) (3) Data frame handling\nI0521 00:58:03.567457 3652 log.go:172] (0xc00030e6e0) (3) Data frame sent\nI0521 00:58:03.567619 3652 log.go:172] (0xc000c52c60) Data frame received for 3\nI0521 00:58:03.567647 3652 log.go:172] (0xc00030e6e0) (3) Data frame handling\nI0521 00:58:03.567691 3652 log.go:172] (0xc000c52c60) Data frame received for 5\nI0521 00:58:03.567718 3652 log.go:172] (0xc00030ee60) (5) Data frame handling\nI0521 00:58:03.569577 3652 log.go:172] (0xc000c52c60) Data frame received for 1\nI0521 00:58:03.569600 3652 log.go:172] (0xc000269360) (1) Data frame handling\nI0521 00:58:03.569620 3652 log.go:172] (0xc000269360) (1) Data frame sent\nI0521 00:58:03.569708 3652 log.go:172] (0xc000c52c60) (0xc000269360) Stream removed, broadcasting: 1\nI0521 00:58:03.569944 3652 log.go:172] (0xc000c52c60) Go away received\nI0521 00:58:03.570023 3652 log.go:172] (0xc000c52c60) (0xc000269360) Stream removed, broadcasting: 1\nI0521 00:58:03.570054 3652 log.go:172] (0xc000c52c60) (0xc00030e6e0) Stream removed, broadcasting: 3\nI0521 00:58:03.570064 3652 log.go:172] (0xc000c52c60) (0xc00030ee60) Stream removed, broadcasting: 5\n" May 21 00:58:03.573: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 21 00:58:03.573: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: 
'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 21 00:58:03.574: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 21 00:58:03.837: INFO: stderr: "I0521 00:58:03.734579 3672 log.go:172] (0xc000564000) (0xc0005401e0) Create stream\nI0521 00:58:03.734649 3672 log.go:172] (0xc000564000) (0xc0005401e0) Stream added, broadcasting: 1\nI0521 00:58:03.736196 3672 log.go:172] (0xc000564000) Reply frame received for 1\nI0521 00:58:03.736234 3672 log.go:172] (0xc000564000) (0xc000504140) Create stream\nI0521 00:58:03.736247 3672 log.go:172] (0xc000564000) (0xc000504140) Stream added, broadcasting: 3\nI0521 00:58:03.737060 3672 log.go:172] (0xc000564000) Reply frame received for 3\nI0521 00:58:03.737095 3672 log.go:172] (0xc000564000) (0xc0004c8d20) Create stream\nI0521 00:58:03.737105 3672 log.go:172] (0xc000564000) (0xc0004c8d20) Stream added, broadcasting: 5\nI0521 00:58:03.738117 3672 log.go:172] (0xc000564000) Reply frame received for 5\nI0521 00:58:03.802724 3672 log.go:172] (0xc000564000) Data frame received for 5\nI0521 00:58:03.802754 3672 log.go:172] (0xc0004c8d20) (5) Data frame handling\nI0521 00:58:03.802773 3672 log.go:172] (0xc0004c8d20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0521 00:58:03.830379 3672 log.go:172] (0xc000564000) Data frame received for 3\nI0521 00:58:03.830427 3672 log.go:172] (0xc000504140) (3) Data frame handling\nI0521 00:58:03.830471 3672 log.go:172] (0xc000504140) (3) Data frame sent\nI0521 00:58:03.830517 3672 log.go:172] (0xc000564000) Data frame received for 3\nI0521 00:58:03.830533 3672 log.go:172] (0xc000504140) (3) Data frame handling\nI0521 00:58:03.830692 3672 log.go:172] (0xc000564000) Data frame received for 5\nI0521 00:58:03.830722 3672 log.go:172] (0xc0004c8d20) (5) Data frame handling\nI0521 00:58:03.832420 
3672 log.go:172] (0xc000564000) Data frame received for 1\nI0521 00:58:03.832448 3672 log.go:172] (0xc0005401e0) (1) Data frame handling\nI0521 00:58:03.832480 3672 log.go:172] (0xc0005401e0) (1) Data frame sent\nI0521 00:58:03.832498 3672 log.go:172] (0xc000564000) (0xc0005401e0) Stream removed, broadcasting: 1\nI0521 00:58:03.832517 3672 log.go:172] (0xc000564000) Go away received\nI0521 00:58:03.832941 3672 log.go:172] (0xc000564000) (0xc0005401e0) Stream removed, broadcasting: 1\nI0521 00:58:03.832975 3672 log.go:172] (0xc000564000) (0xc000504140) Stream removed, broadcasting: 3\nI0521 00:58:03.832997 3672 log.go:172] (0xc000564000) (0xc0004c8d20) Stream removed, broadcasting: 5\n" May 21 00:58:03.837: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 21 00:58:03.837: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 21 00:58:03.837: INFO: Waiting for statefulset status.replicas updated to 0 May 21 00:58:03.841: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 21 00:58:13.869: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 21 00:58:13.869: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 21 00:58:13.869: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 21 00:58:13.899: INFO: POD NODE PHASE GRACE CONDITIONS May 21 00:58:13.899: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-05-21 00:57:21 +0000 UTC }] May 21 00:58:13.899: INFO: ss-1 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:42 +0000 UTC }] May 21 00:58:13.899: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:42 +0000 UTC }] May 21 00:58:13.899: INFO: May 21 00:58:13.899: INFO: StatefulSet ss has not reached scale 0, at 3 May 21 00:58:14.985: INFO: POD NODE PHASE GRACE CONDITIONS May 21 00:58:14.985: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:21 +0000 UTC }] May 21 00:58:14.985: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 
0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:42 +0000 UTC }] May 21 00:58:14.985: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:42 +0000 UTC }] May 21 00:58:14.985: INFO: May 21 00:58:14.985: INFO: StatefulSet ss has not reached scale 0, at 3 May 21 00:58:15.990: INFO: POD NODE PHASE GRACE CONDITIONS May 21 00:58:15.990: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:21 +0000 UTC }] May 21 00:58:15.990: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:42 +0000 UTC }] May 21 00:58:15.990: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 
00:57:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:42 +0000 UTC }] May 21 00:58:15.990: INFO: May 21 00:58:15.990: INFO: StatefulSet ss has not reached scale 0, at 3 May 21 00:58:16.995: INFO: POD NODE PHASE GRACE CONDITIONS May 21 00:58:16.995: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:21 +0000 UTC }] May 21 00:58:16.995: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:42 +0000 UTC }] May 21 00:58:16.995: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:42 +0000 UTC }] May 21 00:58:16.995: INFO: May 21 00:58:16.995: INFO: StatefulSet ss has not reached scale 0, at 3 May 21 00:58:18.000: INFO: POD NODE PHASE GRACE CONDITIONS May 21 00:58:18.000: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:21 +0000 UTC }] May 21 00:58:18.000: INFO: May 21 00:58:18.000: INFO: StatefulSet ss has not reached scale 0, at 1 May 21 00:58:19.004: INFO: POD NODE PHASE GRACE CONDITIONS May 21 00:58:19.004: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:21 +0000 UTC }] May 21 00:58:19.004: INFO: May 21 00:58:19.004: INFO: StatefulSet ss has not reached scale 0, at 1 May 21 00:58:20.008: INFO: POD NODE PHASE GRACE CONDITIONS May 21 00:58:20.008: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:21 +0000 UTC }] May 21 00:58:20.008: INFO: May 21 00:58:20.008: INFO: StatefulSet ss has not reached scale 0, at 1 May 21 00:58:21.013: INFO: POD NODE PHASE GRACE CONDITIONS May 21 00:58:21.013: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:21 +0000 UTC }] May 21 00:58:21.014: INFO: May 21 00:58:21.014: INFO: StatefulSet ss has not reached scale 0, at 1 May 21 00:58:22.017: INFO: POD NODE PHASE GRACE CONDITIONS May 21 00:58:22.017: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:21 +0000 UTC }] May 21 00:58:22.017: INFO: May 21 00:58:22.017: INFO: StatefulSet ss has not reached scale 0, at 1 May 21 00:58:23.021: INFO: POD NODE PHASE GRACE CONDITIONS May 21 00:58:23.022: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:58:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-05-21 00:57:21 +0000 UTC }] May 21 00:58:23.022: INFO: May 21 00:58:23.022: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-1494 May 21 00:58:24.026: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 00:58:24.183: INFO: rc: 1 May 21 00:58:24.183: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 May 21 00:58:34.183: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 00:58:34.290: INFO: rc: 1 May 21 00:58:34.290: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 21 00:58:44.291: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 00:58:44.398: INFO: rc: 1 May 21 00:58:44.398: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 21 00:58:54.398: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 00:58:54.498: INFO: rc: 1 May 21 00:58:54.498: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 21 00:59:04.498: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 00:59:04.608: INFO: rc: 1 May 21 00:59:04.608: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 21 00:59:14.608: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 00:59:14.708: INFO: rc: 1 May 21 00:59:14.708: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 21 00:59:24.708: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 00:59:24.818: INFO: rc: 1 May 21 00:59:24.818: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 21 00:59:34.818: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 00:59:34.913: INFO: rc: 1 May 21 00:59:34.913: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 21 00:59:44.914: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 00:59:45.046: INFO: rc: 1 May 21 00:59:45.046: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 21 00:59:55.047: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 00:59:55.154: INFO: rc: 1 May 21 00:59:55.154: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 21 01:00:05.155: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 01:00:05.256: INFO: rc: 1 May 21 01:00:05.256: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 21 01:00:15.256: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 01:00:15.353: INFO: rc: 1 May 21 01:00:15.353: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 21 01:00:25.353: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 01:00:25.458: INFO: rc: 1 May 21 01:00:25.458: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 21 01:00:35.458: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 01:00:35.554: INFO: rc: 1 May 21 01:00:35.554: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 21 01:00:45.554: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 01:00:45.654: INFO: rc: 1 May 21 01:00:45.654: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 21 01:00:55.654: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 01:00:55.762: INFO: rc: 1 May 21 01:00:55.762: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 21 01:01:05.763: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 01:01:05.863: INFO: rc: 1 May 21 01:01:05.863: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 21 01:01:15.863: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 01:01:15.966: INFO: rc: 1 May 21 01:01:15.966: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 21 01:01:25.966: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 01:01:26.070: INFO: rc: 1 May 21 01:01:26.071: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 21 01:01:36.071: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 01:01:36.188: INFO: rc: 1 May 21 01:01:36.188: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 21 01:01:46.188: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 01:01:46.299: INFO: rc: 1 May 21 01:01:46.299: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 21 01:01:56.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 01:01:56.403: INFO: rc: 1 May 21 01:01:56.403: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 21 01:02:06.403: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 01:02:06.518: INFO: rc: 1 May 21 01:02:06.518: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 21 01:02:16.518: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 01:02:16.629: INFO: rc: 1 May 21 01:02:16.629: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 21 01:02:26.629: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 01:02:26.746: INFO: rc: 1 May 21 01:02:26.746: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 21 01:02:36.746: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 01:02:36.873: INFO: rc: 1 May 21 01:02:36.873: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 21 01:02:46.873: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 01:02:46.980: INFO: rc: 1 May 21 01:02:46.980: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 21 01:02:56.981: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 01:02:57.077: INFO: rc: 1 May 21 01:02:57.077: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 21 01:03:07.077: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 01:03:07.190: INFO: rc: 1 May 21 01:03:07.190: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 21 01:03:17.190: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 01:03:17.304: INFO: rc: 1 May 21 01:03:17.304: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 21 01:03:27.304: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1494 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 21 01:03:27.424: INFO: rc: 1 May 21 01:03:27.424: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: May 21 01:03:27.424: INFO: Scaling statefulset ss to 0 May 21 01:03:27.432: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 21 01:03:27.435: INFO: Deleting all statefulset in ns statefulset-1494 May 21 01:03:27.437: INFO: Scaling statefulset ss to 0 May 21 01:03:27.445: INFO: Waiting for statefulset status.replicas updated to 0 May 21 01:03:27.447: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 01:03:27.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1494" for this suite. 
• [SLOW TEST:365.906 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":288,"completed":253,"skipped":4206,"failed":0} [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 01:03:27.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 01:03:27.557: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-47788c77-39c4-42ff-8835-1029692bbc7a" in namespace "security-context-test-2550" to be "Succeeded or Failed" May 21 01:03:27.591: INFO: Pod 
"busybox-readonly-false-47788c77-39c4-42ff-8835-1029692bbc7a": Phase="Pending", Reason="", readiness=false. Elapsed: 34.248001ms May 21 01:03:29.596: INFO: Pod "busybox-readonly-false-47788c77-39c4-42ff-8835-1029692bbc7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038621388s May 21 01:03:31.601: INFO: Pod "busybox-readonly-false-47788c77-39c4-42ff-8835-1029692bbc7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043679173s May 21 01:03:31.601: INFO: Pod "busybox-readonly-false-47788c77-39c4-42ff-8835-1029692bbc7a" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 01:03:31.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2550" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":288,"completed":254,"skipped":4206,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 01:03:31.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs May 21 
01:03:31.936: INFO: Waiting up to 5m0s for pod "pod-f08d6ade-8d8f-4ef4-b587-a0c625ed91ed" in namespace "emptydir-6987" to be "Succeeded or Failed" May 21 01:03:31.954: INFO: Pod "pod-f08d6ade-8d8f-4ef4-b587-a0c625ed91ed": Phase="Pending", Reason="", readiness=false. Elapsed: 17.695737ms May 21 01:03:34.048: INFO: Pod "pod-f08d6ade-8d8f-4ef4-b587-a0c625ed91ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111482667s May 21 01:03:36.052: INFO: Pod "pod-f08d6ade-8d8f-4ef4-b587-a0c625ed91ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.115845667s STEP: Saw pod success May 21 01:03:36.052: INFO: Pod "pod-f08d6ade-8d8f-4ef4-b587-a0c625ed91ed" satisfied condition "Succeeded or Failed" May 21 01:03:36.055: INFO: Trying to get logs from node latest-worker2 pod pod-f08d6ade-8d8f-4ef4-b587-a0c625ed91ed container test-container: STEP: delete the pod May 21 01:03:36.117: INFO: Waiting for pod pod-f08d6ade-8d8f-4ef4-b587-a0c625ed91ed to disappear May 21 01:03:36.130: INFO: Pod pod-f08d6ade-8d8f-4ef4-b587-a0c625ed91ed no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 01:03:36.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6987" for this suite. 
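The security-context and emptydir tests above both block on `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"`, polling the pod until its phase is terminal. A sketch of the terminal-phase condition being polled (the `PodPhase` string values follow the Kubernetes core/v1 API; the check itself is a simplified stand-in for the framework's condition function):

```go
package main

import (
	"errors"
	"fmt"
)

// PodPhase mirrors the Kubernetes core/v1 PodPhase string values.
type PodPhase string

const (
	PodPending   PodPhase = "Pending"
	PodRunning   PodPhase = "Running"
	PodSucceeded PodPhase = "Succeeded"
	PodFailed    PodPhase = "Failed"
)

// terminal reports whether the phase satisfies the "Succeeded or Failed"
// condition from the log. Failed is also terminal, but is surfaced as an
// error because the conformance test then fails its assertion.
func terminal(p PodPhase) (done bool, err error) {
	switch p {
	case PodSucceeded:
		return true, nil
	case PodFailed:
		return true, errors.New("pod reached Failed phase")
	default:
		return false, nil
	}
}

func main() {
	// Phases observed for the emptydir pod above: Pending, Pending, Succeeded.
	for _, p := range []PodPhase{PodPending, PodPending, PodSucceeded} {
		done, err := terminal(p)
		fmt.Printf("%s done=%v err=%v\n", p, done, err)
	}
}
```

Each poll iteration corresponds to one `Phase="Pending" ... Elapsed:` line in the log; the wait ends on the first iteration where `done` is true.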
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":255,"skipped":4224,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 01:03:36.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 21 01:03:40.762: INFO: Successfully updated pod "labelsupdatef6995ccd-c164-4a10-bacd-1f552961da44" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 01:03:42.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5205" for this suite. 
• [SLOW TEST:6.653 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":256,"skipped":4291,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 01:03:42.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 21 01:03:47.439: INFO: Successfully updated pod "annotationupdateeaee8c73-d0f8-4504-a268-d3a0faef59fc" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 01:03:49.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5151" for this suite. 
• [SLOW TEST:6.712 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":257,"skipped":4303,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 01:03:49.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 21 01:03:49.643: INFO: Waiting up to 5m0s for pod "downwardapi-volume-be20e6c7-aa99-4c02-a5c5-498baae38cd6" in namespace "downward-api-6814" to be "Succeeded or Failed" May 21 01:03:49.731: INFO: Pod "downwardapi-volume-be20e6c7-aa99-4c02-a5c5-498baae38cd6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 87.874713ms May 21 01:03:51.839: INFO: Pod "downwardapi-volume-be20e6c7-aa99-4c02-a5c5-498baae38cd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195646558s May 21 01:03:53.863: INFO: Pod "downwardapi-volume-be20e6c7-aa99-4c02-a5c5-498baae38cd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.219556458s STEP: Saw pod success May 21 01:03:53.863: INFO: Pod "downwardapi-volume-be20e6c7-aa99-4c02-a5c5-498baae38cd6" satisfied condition "Succeeded or Failed" May 21 01:03:53.866: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-be20e6c7-aa99-4c02-a5c5-498baae38cd6 container client-container: STEP: delete the pod May 21 01:03:53.908: INFO: Waiting for pod downwardapi-volume-be20e6c7-aa99-4c02-a5c5-498baae38cd6 to disappear May 21 01:03:53.923: INFO: Pod downwardapi-volume-be20e6c7-aa99-4c02-a5c5-498baae38cd6 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 01:03:53.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6814" for this suite. 
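The downward API volume test above mounts a file whose content is the container's CPU request divided by a `resourceFieldRef` divisor. A sketch of that conversion in plain millicpu arithmetic, assuming round-up (ceiling) division as the rounding rule; the real implementation operates on `resource.Quantity` values, so this models the behavior rather than copying the kubelet's source:

```go
package main

import "fmt"

// cpuFieldValue returns the string written into the downward API file for
// a requests.cpu resourceFieldRef: the request divided by the divisor,
// rounded up. Both arguments are in millicpu, so a 250m request with a
// divisor of "1m" yields "250", while the default divisor of 1 CPU
// (1000m) yields "1". Round-up division is an assumption here.
func cpuFieldValue(requestMilli, divisorMilli int64) string {
	return fmt.Sprintf("%d", (requestMilli+divisorMilli-1)/divisorMilli)
}

func main() {
	fmt.Println(cpuFieldValue(250, 1))    // divisor "1m"
	fmt.Println(cpuFieldValue(250, 1000)) // default divisor "1"
}
```

The test's assertion then reduces to reading the mounted file from the pod's logs and comparing it against the expected string.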
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":258,"skipped":4305,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 01:03:53.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-1678 STEP: creating replication controller nodeport-test in namespace services-1678 I0521 01:03:54.072478 8 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-1678, replica count: 2 I0521 01:03:57.122885 8 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0521 01:04:00.123133 8 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 21 01:04:00.123: INFO: Creating new exec pod May 21 01:04:05.181: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1678 execpodflntb -- /bin/sh -x -c nc -zv -t -w 2 
nodeport-test 80' May 21 01:04:08.105: INFO: stderr: "I0521 01:04:08.038258 4306 log.go:172] (0xc00003a420) (0xc0006d0dc0) Create stream\nI0521 01:04:08.038296 4306 log.go:172] (0xc00003a420) (0xc0006d0dc0) Stream added, broadcasting: 1\nI0521 01:04:08.039707 4306 log.go:172] (0xc00003a420) Reply frame received for 1\nI0521 01:04:08.039750 4306 log.go:172] (0xc00003a420) (0xc0006c4640) Create stream\nI0521 01:04:08.039761 4306 log.go:172] (0xc00003a420) (0xc0006c4640) Stream added, broadcasting: 3\nI0521 01:04:08.040389 4306 log.go:172] (0xc00003a420) Reply frame received for 3\nI0521 01:04:08.040425 4306 log.go:172] (0xc00003a420) (0xc0006c4f00) Create stream\nI0521 01:04:08.040441 4306 log.go:172] (0xc00003a420) (0xc0006c4f00) Stream added, broadcasting: 5\nI0521 01:04:08.041051 4306 log.go:172] (0xc00003a420) Reply frame received for 5\nI0521 01:04:08.096506 4306 log.go:172] (0xc00003a420) Data frame received for 5\nI0521 01:04:08.096532 4306 log.go:172] (0xc0006c4f00) (5) Data frame handling\nI0521 01:04:08.096578 4306 log.go:172] (0xc0006c4f00) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0521 01:04:08.096651 4306 log.go:172] (0xc00003a420) Data frame received for 5\nI0521 01:04:08.096672 4306 log.go:172] (0xc0006c4f00) (5) Data frame handling\nI0521 01:04:08.096681 4306 log.go:172] (0xc0006c4f00) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0521 01:04:08.097308 4306 log.go:172] (0xc00003a420) Data frame received for 5\nI0521 01:04:08.097347 4306 log.go:172] (0xc0006c4f00) (5) Data frame handling\nI0521 01:04:08.097369 4306 log.go:172] (0xc00003a420) Data frame received for 3\nI0521 01:04:08.097381 4306 log.go:172] (0xc0006c4640) (3) Data frame handling\nI0521 01:04:08.099094 4306 log.go:172] (0xc00003a420) Data frame received for 1\nI0521 01:04:08.099115 4306 log.go:172] (0xc0006d0dc0) (1) Data frame handling\nI0521 01:04:08.099138 4306 log.go:172] (0xc0006d0dc0) (1) Data frame sent\nI0521 01:04:08.099158 4306 
log.go:172] (0xc00003a420) (0xc0006d0dc0) Stream removed, broadcasting: 1\nI0521 01:04:08.099175 4306 log.go:172] (0xc00003a420) Go away received\nI0521 01:04:08.099573 4306 log.go:172] (0xc00003a420) (0xc0006d0dc0) Stream removed, broadcasting: 1\nI0521 01:04:08.099588 4306 log.go:172] (0xc00003a420) (0xc0006c4640) Stream removed, broadcasting: 3\nI0521 01:04:08.099594 4306 log.go:172] (0xc00003a420) (0xc0006c4f00) Stream removed, broadcasting: 5\n" May 21 01:04:08.105: INFO: stdout: "" May 21 01:04:08.106: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1678 execpodflntb -- /bin/sh -x -c nc -zv -t -w 2 10.107.188.211 80' May 21 01:04:08.301: INFO: stderr: "I0521 01:04:08.233897 4337 log.go:172] (0xc00083afd0) (0xc000950780) Create stream\nI0521 01:04:08.233951 4337 log.go:172] (0xc00083afd0) (0xc000950780) Stream added, broadcasting: 1\nI0521 01:04:08.238549 4337 log.go:172] (0xc00083afd0) Reply frame received for 1\nI0521 01:04:08.238605 4337 log.go:172] (0xc00083afd0) (0xc000822500) Create stream\nI0521 01:04:08.238625 4337 log.go:172] (0xc00083afd0) (0xc000822500) Stream added, broadcasting: 3\nI0521 01:04:08.239802 4337 log.go:172] (0xc00083afd0) Reply frame received for 3\nI0521 01:04:08.239841 4337 log.go:172] (0xc00083afd0) (0xc000668140) Create stream\nI0521 01:04:08.239855 4337 log.go:172] (0xc00083afd0) (0xc000668140) Stream added, broadcasting: 5\nI0521 01:04:08.240890 4337 log.go:172] (0xc00083afd0) Reply frame received for 5\nI0521 01:04:08.296136 4337 log.go:172] (0xc00083afd0) Data frame received for 5\nI0521 01:04:08.296172 4337 log.go:172] (0xc000668140) (5) Data frame handling\nI0521 01:04:08.296187 4337 log.go:172] (0xc000668140) (5) Data frame sent\n+ nc -zv -t -w 2 10.107.188.211 80\nConnection to 10.107.188.211 80 port [tcp/http] succeeded!\nI0521 01:04:08.296211 4337 log.go:172] (0xc00083afd0) Data frame received for 3\nI0521 01:04:08.296237 4337 log.go:172] 
(0xc000822500) (3) Data frame handling\nI0521 01:04:08.296267 4337 log.go:172] (0xc00083afd0) Data frame received for 5\nI0521 01:04:08.296278 4337 log.go:172] (0xc000668140) (5) Data frame handling\nI0521 01:04:08.297931 4337 log.go:172] (0xc00083afd0) Data frame received for 1\nI0521 01:04:08.297947 4337 log.go:172] (0xc000950780) (1) Data frame handling\nI0521 01:04:08.297961 4337 log.go:172] (0xc000950780) (1) Data frame sent\nI0521 01:04:08.297984 4337 log.go:172] (0xc00083afd0) (0xc000950780) Stream removed, broadcasting: 1\nI0521 01:04:08.298010 4337 log.go:172] (0xc00083afd0) Go away received\nI0521 01:04:08.298394 4337 log.go:172] (0xc00083afd0) (0xc000950780) Stream removed, broadcasting: 1\nI0521 01:04:08.298411 4337 log.go:172] (0xc00083afd0) (0xc000822500) Stream removed, broadcasting: 3\nI0521 01:04:08.298420 4337 log.go:172] (0xc00083afd0) (0xc000668140) Stream removed, broadcasting: 5\n" May 21 01:04:08.302: INFO: stdout: "" May 21 01:04:08.302: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1678 execpodflntb -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30478' May 21 01:04:08.523: INFO: stderr: "I0521 01:04:08.436572 4357 log.go:172] (0xc00003a8f0) (0xc0000f3f40) Create stream\nI0521 01:04:08.436640 4357 log.go:172] (0xc00003a8f0) (0xc0000f3f40) Stream added, broadcasting: 1\nI0521 01:04:08.439800 4357 log.go:172] (0xc00003a8f0) Reply frame received for 1\nI0521 01:04:08.439839 4357 log.go:172] (0xc00003a8f0) (0xc000abe000) Create stream\nI0521 01:04:08.439851 4357 log.go:172] (0xc00003a8f0) (0xc000abe000) Stream added, broadcasting: 3\nI0521 01:04:08.440743 4357 log.go:172] (0xc00003a8f0) Reply frame received for 3\nI0521 01:04:08.440784 4357 log.go:172] (0xc00003a8f0) (0xc0005486e0) Create stream\nI0521 01:04:08.440797 4357 log.go:172] (0xc00003a8f0) (0xc0005486e0) Stream added, broadcasting: 5\nI0521 01:04:08.442001 4357 log.go:172] (0xc00003a8f0) Reply frame 
received for 5\nI0521 01:04:08.517406 4357 log.go:172] (0xc00003a8f0) Data frame received for 3\nI0521 01:04:08.517438 4357 log.go:172] (0xc000abe000) (3) Data frame handling\nI0521 01:04:08.517466 4357 log.go:172] (0xc00003a8f0) Data frame received for 5\nI0521 01:04:08.517481 4357 log.go:172] (0xc0005486e0) (5) Data frame handling\nI0521 01:04:08.517492 4357 log.go:172] (0xc0005486e0) (5) Data frame sent\nI0521 01:04:08.517500 4357 log.go:172] (0xc00003a8f0) Data frame received for 5\nI0521 01:04:08.517506 4357 log.go:172] (0xc0005486e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30478\nConnection to 172.17.0.13 30478 port [tcp/30478] succeeded!\nI0521 01:04:08.519576 4357 log.go:172] (0xc00003a8f0) Data frame received for 1\nI0521 01:04:08.519604 4357 log.go:172] (0xc0000f3f40) (1) Data frame handling\nI0521 01:04:08.519643 4357 log.go:172] (0xc0000f3f40) (1) Data frame sent\nI0521 01:04:08.519668 4357 log.go:172] (0xc00003a8f0) (0xc0000f3f40) Stream removed, broadcasting: 1\nI0521 01:04:08.519684 4357 log.go:172] (0xc00003a8f0) Go away received\nI0521 01:04:08.520145 4357 log.go:172] (0xc00003a8f0) (0xc0000f3f40) Stream removed, broadcasting: 1\nI0521 01:04:08.520175 4357 log.go:172] (0xc00003a8f0) (0xc000abe000) Stream removed, broadcasting: 3\nI0521 01:04:08.520190 4357 log.go:172] (0xc00003a8f0) (0xc0005486e0) Stream removed, broadcasting: 5\n" May 21 01:04:08.523: INFO: stdout: "" May 21 01:04:08.523: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1678 execpodflntb -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30478' May 21 01:04:08.725: INFO: stderr: "I0521 01:04:08.651678 4376 log.go:172] (0xc000a753f0) (0xc00070eb40) Create stream\nI0521 01:04:08.651734 4376 log.go:172] (0xc000a753f0) (0xc00070eb40) Stream added, broadcasting: 1\nI0521 01:04:08.656245 4376 log.go:172] (0xc000a753f0) Reply frame received for 1\nI0521 01:04:08.656284 4376 log.go:172] 
(0xc000a753f0) (0xc0006f2dc0) Create stream\nI0521 01:04:08.656296 4376 log.go:172] (0xc000a753f0) (0xc0006f2dc0) Stream added, broadcasting: 3\nI0521 01:04:08.657098 4376 log.go:172] (0xc000a753f0) Reply frame received for 3\nI0521 01:04:08.657282 4376 log.go:172] (0xc000a753f0) (0xc0004b8280) Create stream\nI0521 01:04:08.657296 4376 log.go:172] (0xc000a753f0) (0xc0004b8280) Stream added, broadcasting: 5\nI0521 01:04:08.657966 4376 log.go:172] (0xc000a753f0) Reply frame received for 5\nI0521 01:04:08.718368 4376 log.go:172] (0xc000a753f0) Data frame received for 3\nI0521 01:04:08.718419 4376 log.go:172] (0xc0006f2dc0) (3) Data frame handling\nI0521 01:04:08.718457 4376 log.go:172] (0xc000a753f0) Data frame received for 5\nI0521 01:04:08.718502 4376 log.go:172] (0xc0004b8280) (5) Data frame handling\nI0521 01:04:08.718530 4376 log.go:172] (0xc0004b8280) (5) Data frame sent\nI0521 01:04:08.718545 4376 log.go:172] (0xc000a753f0) Data frame received for 5\nI0521 01:04:08.718554 4376 log.go:172] (0xc0004b8280) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30478\nConnection to 172.17.0.12 30478 port [tcp/30478] succeeded!\nI0521 01:04:08.719728 4376 log.go:172] (0xc000a753f0) Data frame received for 1\nI0521 01:04:08.719759 4376 log.go:172] (0xc00070eb40) (1) Data frame handling\nI0521 01:04:08.719783 4376 log.go:172] (0xc00070eb40) (1) Data frame sent\nI0521 01:04:08.719808 4376 log.go:172] (0xc000a753f0) (0xc00070eb40) Stream removed, broadcasting: 1\nI0521 01:04:08.719904 4376 log.go:172] (0xc000a753f0) Go away received\nI0521 01:04:08.720172 4376 log.go:172] (0xc000a753f0) (0xc00070eb40) Stream removed, broadcasting: 1\nI0521 01:04:08.720206 4376 log.go:172] (0xc000a753f0) (0xc0006f2dc0) Stream removed, broadcasting: 3\nI0521 01:04:08.720231 4376 log.go:172] (0xc000a753f0) (0xc0004b8280) Stream removed, broadcasting: 5\n" May 21 01:04:08.725: INFO: stdout: "" [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 01:04:08.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1678" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:14.803 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":288,"completed":259,"skipped":4333,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 01:04:08.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 01:04:08.776: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 21 01:04:08.818: INFO: Pod name sample-pod: Found 0 pods out of 1 May 21 
01:04:13.863: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 21 01:04:13.863: INFO: Creating deployment "test-rolling-update-deployment" May 21 01:04:13.887: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 21 01:04:13.938: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 21 01:04:16.075: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 21 01:04:16.078: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619854, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619854, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619854, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619854, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 21 01:04:18.083: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619854, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619854, 
loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619854, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619854, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 21 01:04:20.082: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 21 01:04:20.092: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-6905 /apis/apps/v1/namespaces/deployment-6905/deployments/test-rolling-update-deployment 7b65e51f-5da7-4c72-9f9f-39911995f1b4 6369326 1 2020-05-21 01:04:13 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-05-21 01:04:13 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-21 01:04:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002a9a208 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-21 01:04:14 +0000 
UTC,LastTransitionTime:2020-05-21 01:04:14 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-df7bb669b" has successfully progressed.,LastUpdateTime:2020-05-21 01:04:18 +0000 UTC,LastTransitionTime:2020-05-21 01:04:14 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 21 01:04:20.096: INFO: New ReplicaSet "test-rolling-update-deployment-df7bb669b" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-df7bb669b deployment-6905 /apis/apps/v1/namespaces/deployment-6905/replicasets/test-rolling-update-deployment-df7bb669b 35470caf-c600-433c-96eb-fbc2ee7724e9 6369315 1 2020-05-21 01:04:13 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 7b65e51f-5da7-4c72-9f9f-39911995f1b4 0xc0050ad4c0 0xc0050ad4c1}] [] [{kube-controller-manager Update apps/v1 2020-05-21 01:04:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b65e51f-5da7-4c72-9f9f-39911995f1b4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: df7bb669b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0050ad538 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 21 01:04:20.096: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 21 01:04:20.096: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-6905 /apis/apps/v1/namespaces/deployment-6905/replicasets/test-rolling-update-controller 127828c6-57bb-47d7-81f5-555fc6d3aa04 6369325 2 2020-05-21 01:04:08 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 7b65e51f-5da7-4c72-9f9f-39911995f1b4 0xc0050ad3b7 0xc0050ad3b8}] [] [{e2e.test Update apps/v1 2020-05-21 01:04:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-21 01:04:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b65e51f-5da7-4c72-9f9f-39911995f1b4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0050ad458 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 21 01:04:20.101: INFO: Pod "test-rolling-update-deployment-df7bb669b-rzw7d" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-df7bb669b-rzw7d test-rolling-update-deployment-df7bb669b- deployment-6905 /api/v1/namespaces/deployment-6905/pods/test-rolling-update-deployment-df7bb669b-rzw7d 9ce7c9bf-4056-4c14-a13f-a00f86e1c7dc 6369314 0 2020-05-21 01:04:14 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-df7bb669b 35470caf-c600-433c-96eb-fbc2ee7724e9 0xc002a9a650 0xc002a9a651}] [] [{kube-controller-manager Update v1 2020-05-21 01:04:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35470caf-c600-433c-96eb-fbc2ee7724e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-21 01:04:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.5\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j557s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j557s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resour
ces:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j557s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCo
ndition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-21 01:04:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-21 01:04:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-21 01:04:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-21 01:04:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.5,StartTime:2020-05-21 01:04:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-21 01:04:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://1faf94f86835681bbbaf417e8cff74978443012199451ef7b2dcaa0edee40f25,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 01:04:20.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6905" for this suite. 
• [SLOW TEST:11.376 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":260,"skipped":4357,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 01:04:20.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 21 01:04:20.482: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 21 01:04:22.492: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63725619860, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619860, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619860, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619860, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 21 01:04:25.523: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 01:04:25.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 01:04:27.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8876" for this suite. STEP: Destroying namespace "webhook-8876-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.030 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":288,"completed":261,"skipped":4364,"failed":0} S ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 01:04:27.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 01:04:27.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5712" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":288,"completed":262,"skipped":4365,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 01:04:27.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 01:04:27.453: INFO: Creating deployment "test-recreate-deployment" May 21 01:04:27.660: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1 May 21 01:04:27.697: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 21 01:04:30.017: INFO: Waiting for deployment "test-recreate-deployment" to complete May 21 01:04:30.020: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619867, loc:(*time.Location)(0x7c342a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619867, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619868, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619867, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 21 01:04:32.023: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 21 01:04:32.030: INFO: Updating deployment test-recreate-deployment May 21 01:04:32.030: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 21 01:04:32.712: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-3887 /apis/apps/v1/namespaces/deployment-3887/deployments/test-recreate-deployment 0cf69d67-13f0-4be4-ba38-2c24426686ca 6369498 2 2020-05-21 01:04:27 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-21 01:04:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-21 01:04:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00463d858 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-21 01:04:32 +0000 UTC,LastTransitionTime:2020-05-21 01:04:32 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-05-21 01:04:32 +0000 UTC,LastTransitionTime:2020-05-21 01:04:27 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 21 01:04:32.741: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7 deployment-3887 /apis/apps/v1/namespaces/deployment-3887/replicasets/test-recreate-deployment-d5667d9c7 804ac397-14f1-4408-95e9-79ead54ae330 6369497 1 2020-05-21 01:04:32 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 0cf69d67-13f0-4be4-ba38-2c24426686ca 0xc00463dd60 0xc00463dd61}] [] [{kube-controller-manager Update apps/v1 2020-05-21 01:04:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0cf69d67-13f0-4be4-ba38-2c24426686ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00463dde8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 21 01:04:32.741: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 21 01:04:32.742: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6d65b9f6d8 deployment-3887 /apis/apps/v1/namespaces/deployment-3887/replicasets/test-recreate-deployment-6d65b9f6d8 0ccfef1c-7c80-4b8b-9816-bb5d2c6d3b74 6369487 2 2020-05-21 01:04:27 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 0cf69d67-13f0-4be4-ba38-2c24426686ca 0xc00463dc67 0xc00463dc68}] [] [{kube-controller-manager Update apps/v1 2020-05-21 01:04:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0cf69d67-13f0-4be4-ba38-2c24426686ca\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]stri
ng{name: sample-pod-3,pod-template-hash: 6d65b9f6d8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00463dcf8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 21 01:04:32.746: INFO: Pod "test-recreate-deployment-d5667d9c7-j4bzd" is not available: &Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-j4bzd test-recreate-deployment-d5667d9c7- deployment-3887 /api/v1/namespaces/deployment-3887/pods/test-recreate-deployment-d5667d9c7-j4bzd 9857ccd0-6ff1-4edc-9d49-292bb4ff7b4d 6369501 0 2020-05-21 01:04:32 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 804ac397-14f1-4408-95e9-79ead54ae330 0xc0008566b0 0xc0008566b1}] [] [{kube-controller-manager Update v1 2020-05-21 01:04:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"804ac397-14f1-4408-95e9-79ead54ae330\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-21 01:04:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vrdql,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vrdql,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:Res
ourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vrdql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbe
Time:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-21 01:04:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-21 01:04:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-21 01:04:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-21 01:04:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-21 01:04:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 01:04:32.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3887" for this suite. 
• [SLOW TEST:5.503 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":263,"skipped":4397,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 01:04:32.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 21 01:04:32.867: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 01:04:34.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7442" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":288,"completed":264,"skipped":4407,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 21 01:04:34.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 21 01:04:43.224: INFO: 10 pods remaining May 21 01:04:43.224: INFO: 0 pods have nil DeletionTimestamp May 21 01:04:43.224: INFO: May 21 01:04:44.458: INFO: 0 pods remaining May 21 01:04:44.458: INFO: 0 pods have nil DeletionTimestamp May 21 01:04:44.458: INFO: STEP: Gathering metrics W0521 01:04:45.167119 8 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 21 01:04:45.167: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 21 01:04:45.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1580" for this suite. 
• [SLOW TEST:10.911 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":288,"completed":265,"skipped":4412,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 01:04:45.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: set up a multi version CRD
May 21 01:04:45.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 01:05:02.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2463" for this suite.
• [SLOW TEST:17.626 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":288,"completed":266,"skipped":4439,"failed":0}
[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 01:05:02.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251
[It] should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: validating api versions
May 21 01:05:03.045: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config api-versions'
May 21 01:05:03.256: INFO: stderr: ""
May 21 01:05:03.256: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 01:05:03.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3464" for this suite.
•
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":288,"completed":267,"skipped":4439,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 01:05:03.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with configMap that has name projected-configmap-test-upd-d49d7f88-cd96-4cef-9e7e-318705f57c80
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-d49d7f88-cd96-4cef-9e7e-318705f57c80
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 01:05:11.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8362" for this suite.
• [SLOW TEST:8.255 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":268,"skipped":4463,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 01:05:11.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
May 21 01:05:11.727: INFO: Waiting up to 5m0s for pod "pod-be69e961-db18-44da-9792-7b5b64dbc828" in namespace "emptydir-5035" to be "Succeeded or Failed"
May 21 01:05:11.760: INFO: Pod "pod-be69e961-db18-44da-9792-7b5b64dbc828": Phase="Pending", Reason="", readiness=false. Elapsed: 32.423017ms
May 21 01:05:13.780: INFO: Pod "pod-be69e961-db18-44da-9792-7b5b64dbc828": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0521538s
May 21 01:05:15.784: INFO: Pod "pod-be69e961-db18-44da-9792-7b5b64dbc828": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056261477s
STEP: Saw pod success
May 21 01:05:15.784: INFO: Pod "pod-be69e961-db18-44da-9792-7b5b64dbc828" satisfied condition "Succeeded or Failed"
May 21 01:05:15.788: INFO: Trying to get logs from node latest-worker pod pod-be69e961-db18-44da-9792-7b5b64dbc828 container test-container: 
STEP: delete the pod
May 21 01:05:15.876: INFO: Waiting for pod pod-be69e961-db18-44da-9792-7b5b64dbc828 to disappear
May 21 01:05:15.889: INFO: Pod pod-be69e961-db18-44da-9792-7b5b64dbc828 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 01:05:15.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5035" for this suite.
•
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":269,"skipped":4478,"failed":0}
SSS
------------------------------
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 01:05:15.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
May 21 01:05:15.956: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 01:05:34.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2109" for this suite.
• [SLOW TEST:18.994 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":288,"completed":270,"skipped":4481,"failed":0}
SS
------------------------------
[sig-network] Services should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 01:05:34.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service multi-endpoint-test in namespace services-8680
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8680 to expose endpoints map[]
May 21 01:05:35.030: INFO: Get endpoints failed (10.712952ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
May 21 01:05:36.033: INFO: successfully validated that service multi-endpoint-test in namespace services-8680 exposes endpoints map[] (1.013702984s elapsed)
STEP: Creating pod pod1 in namespace services-8680
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8680 to expose endpoints map[pod1:[100]]
May 21 01:05:40.166: INFO: successfully validated that service multi-endpoint-test in namespace services-8680 exposes endpoints map[pod1:[100]] (4.126602796s elapsed)
STEP: Creating pod pod2 in namespace services-8680
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8680 to expose endpoints map[pod1:[100] pod2:[101]]
May 21 01:05:44.263: INFO: successfully validated that service multi-endpoint-test in namespace services-8680 exposes endpoints map[pod1:[100] pod2:[101]] (4.076472719s elapsed)
STEP: Deleting pod pod1 in namespace services-8680
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8680 to expose endpoints map[pod2:[101]]
May 21 01:05:45.325: INFO: successfully validated that service multi-endpoint-test in namespace services-8680 exposes endpoints map[pod2:[101]] (1.057705754s elapsed)
STEP: Deleting pod pod2 in namespace services-8680
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8680 to expose endpoints map[]
May 21 01:05:46.362: INFO: successfully validated that service multi-endpoint-test in namespace services-8680 exposes endpoints map[] (1.031535749s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 01:05:46.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8680" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:11.591 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":288,"completed":271,"skipped":4483,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 01:05:46.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 21 01:05:46.995: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 21 01:05:49.295: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619947, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619947, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619947, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619946, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 21 01:05:51.300: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619947, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619947, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619947, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725619946, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 21 01:05:54.326: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 01:05:54.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6606-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 01:05:55.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5772" for this suite.
STEP: Destroying namespace "webhook-5772-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:9.198 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":288,"completed":272,"skipped":4496,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 01:05:55.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 01:06:11.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4894" for this suite.
STEP: Destroying namespace "nsdeletetest-2691" for this suite.
May 21 01:06:11.465: INFO: Namespace nsdeletetest-2691 was already deleted
STEP: Destroying namespace "nsdeletetest-5281" for this suite.
• [SLOW TEST:15.788 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":288,"completed":273,"skipped":4511,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 01:06:11.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 01:06:15.658: INFO: Waiting up to 5m0s for pod "client-envvars-34d037f7-4ae3-47ea-8a5e-ce2397a87212" in namespace "pods-6751" to be "Succeeded or Failed"
May 21 01:06:15.677: INFO: Pod "client-envvars-34d037f7-4ae3-47ea-8a5e-ce2397a87212": Phase="Pending", Reason="", readiness=false. Elapsed: 18.480355ms
May 21 01:06:17.681: INFO: Pod "client-envvars-34d037f7-4ae3-47ea-8a5e-ce2397a87212": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022220239s
May 21 01:06:19.684: INFO: Pod "client-envvars-34d037f7-4ae3-47ea-8a5e-ce2397a87212": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025948012s
STEP: Saw pod success
May 21 01:06:19.684: INFO: Pod "client-envvars-34d037f7-4ae3-47ea-8a5e-ce2397a87212" satisfied condition "Succeeded or Failed"
May 21 01:06:19.688: INFO: Trying to get logs from node latest-worker2 pod client-envvars-34d037f7-4ae3-47ea-8a5e-ce2397a87212 container env3cont: 
STEP: delete the pod
May 21 01:06:19.708: INFO: Waiting for pod client-envvars-34d037f7-4ae3-47ea-8a5e-ce2397a87212 to disappear
May 21 01:06:19.712: INFO: Pod client-envvars-34d037f7-4ae3-47ea-8a5e-ce2397a87212 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 01:06:19.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6751" for this suite.
• [SLOW TEST:8.249 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":288,"completed":274,"skipped":4526,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 01:06:19.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 01:06:26.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7706" for this suite.
STEP: Destroying namespace "nsdeletetest-7073" for this suite.
May 21 01:06:26.138: INFO: Namespace nsdeletetest-7073 was already deleted
STEP: Destroying namespace "nsdeletetest-4502" for this suite.
• [SLOW TEST:6.426 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":288,"completed":275,"skipped":4530,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 01:06:26.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod liveness-2676935e-7b1c-44aa-9dc2-466c7e5243bf in namespace container-probe-9701
May 21 01:06:30.295: INFO: Started pod liveness-2676935e-7b1c-44aa-9dc2-466c7e5243bf in namespace container-probe-9701
STEP: checking the pod's current state and verifying that restartCount is present
May 21 01:06:30.298: INFO: Initial restart count of pod liveness-2676935e-7b1c-44aa-9dc2-466c7e5243bf is 0
May 21 01:06:44.368: INFO: Restart count of pod container-probe-9701/liveness-2676935e-7b1c-44aa-9dc2-466c7e5243bf is now 1 (14.069854147s elapsed)
May 21 01:07:04.406: INFO: Restart count of pod container-probe-9701/liveness-2676935e-7b1c-44aa-9dc2-466c7e5243bf is now 2 (34.108240298s elapsed)
May 21 01:07:24.445: INFO: Restart count of pod container-probe-9701/liveness-2676935e-7b1c-44aa-9dc2-466c7e5243bf is now 3 (54.146750677s elapsed)
May 21 01:07:44.489: INFO: Restart count of pod container-probe-9701/liveness-2676935e-7b1c-44aa-9dc2-466c7e5243bf is now 4 (1m14.191510152s elapsed)
May 21 01:08:52.659: INFO: Restart count of pod container-probe-9701/liveness-2676935e-7b1c-44aa-9dc2-466c7e5243bf is now 5 (2m22.361355422s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 01:08:52.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9701" for this suite.
• [SLOW TEST:146.575 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":288,"completed":276,"skipped":4548,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 01:08:52.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2350.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2350.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2350.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2350.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 21 01:08:59.296: INFO: DNS probes using dns-test-549734a1-2673-403a-acc4-707ada20b68d succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2350.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2350.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2350.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2350.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 21 01:09:05.443: INFO: File wheezy_udp@dns-test-service-3.dns-2350.svc.cluster.local from pod dns-2350/dns-test-f0ba1a79-fbde-4a2a-82f2-deae9e2df8d7 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 21 01:09:05.446: INFO: File jessie_udp@dns-test-service-3.dns-2350.svc.cluster.local from pod dns-2350/dns-test-f0ba1a79-fbde-4a2a-82f2-deae9e2df8d7 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 21 01:09:05.446: INFO: Lookups using dns-2350/dns-test-f0ba1a79-fbde-4a2a-82f2-deae9e2df8d7 failed for: [wheezy_udp@dns-test-service-3.dns-2350.svc.cluster.local jessie_udp@dns-test-service-3.dns-2350.svc.cluster.local]
May 21 01:09:10.452: INFO: File wheezy_udp@dns-test-service-3.dns-2350.svc.cluster.local from pod dns-2350/dns-test-f0ba1a79-fbde-4a2a-82f2-deae9e2df8d7 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 21 01:09:10.456: INFO: File jessie_udp@dns-test-service-3.dns-2350.svc.cluster.local from pod dns-2350/dns-test-f0ba1a79-fbde-4a2a-82f2-deae9e2df8d7 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 21 01:09:10.456: INFO: Lookups using dns-2350/dns-test-f0ba1a79-fbde-4a2a-82f2-deae9e2df8d7 failed for: [wheezy_udp@dns-test-service-3.dns-2350.svc.cluster.local jessie_udp@dns-test-service-3.dns-2350.svc.cluster.local]
May 21 01:09:15.451: INFO: File wheezy_udp@dns-test-service-3.dns-2350.svc.cluster.local from pod dns-2350/dns-test-f0ba1a79-fbde-4a2a-82f2-deae9e2df8d7 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 21 01:09:15.455: INFO: File jessie_udp@dns-test-service-3.dns-2350.svc.cluster.local from pod dns-2350/dns-test-f0ba1a79-fbde-4a2a-82f2-deae9e2df8d7 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 21 01:09:15.455: INFO: Lookups using dns-2350/dns-test-f0ba1a79-fbde-4a2a-82f2-deae9e2df8d7 failed for: [wheezy_udp@dns-test-service-3.dns-2350.svc.cluster.local jessie_udp@dns-test-service-3.dns-2350.svc.cluster.local]
May 21 01:09:20.450: INFO: File wheezy_udp@dns-test-service-3.dns-2350.svc.cluster.local from pod dns-2350/dns-test-f0ba1a79-fbde-4a2a-82f2-deae9e2df8d7 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 21 01:09:20.453: INFO: File jessie_udp@dns-test-service-3.dns-2350.svc.cluster.local from pod dns-2350/dns-test-f0ba1a79-fbde-4a2a-82f2-deae9e2df8d7 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 21 01:09:20.453: INFO: Lookups using dns-2350/dns-test-f0ba1a79-fbde-4a2a-82f2-deae9e2df8d7 failed for: [wheezy_udp@dns-test-service-3.dns-2350.svc.cluster.local jessie_udp@dns-test-service-3.dns-2350.svc.cluster.local]
May 21 01:09:25.451: INFO: File wheezy_udp@dns-test-service-3.dns-2350.svc.cluster.local from pod dns-2350/dns-test-f0ba1a79-fbde-4a2a-82f2-deae9e2df8d7 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 21 01:09:25.457: INFO: File jessie_udp@dns-test-service-3.dns-2350.svc.cluster.local from pod dns-2350/dns-test-f0ba1a79-fbde-4a2a-82f2-deae9e2df8d7 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 21 01:09:25.457: INFO: Lookups using dns-2350/dns-test-f0ba1a79-fbde-4a2a-82f2-deae9e2df8d7 failed for: [wheezy_udp@dns-test-service-3.dns-2350.svc.cluster.local jessie_udp@dns-test-service-3.dns-2350.svc.cluster.local]
May 21 01:09:30.456: INFO: DNS probes using dns-test-f0ba1a79-fbde-4a2a-82f2-deae9e2df8d7 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2350.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2350.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2350.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-2350.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 21 01:09:37.391: INFO: DNS probes using dns-test-d765e75a-3ef0-4f69-a526-7a1293e3fe56 succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 01:09:37.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2350" for this suite.
• [SLOW TEST:44.824 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":288,"completed":277,"skipped":4569,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 01:09:37.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod liveness-c3ab6c19-50e1-4527-86ef-59075e529406 in namespace container-probe-2271
May 21 01:09:42.176: INFO: Started pod liveness-c3ab6c19-50e1-4527-86ef-59075e529406 in namespace container-probe-2271
STEP: checking the pod's current state and verifying that restartCount is present
May 21 01:09:42.179: INFO: Initial restart count of pod liveness-c3ab6c19-50e1-4527-86ef-59075e529406 is 0
May 21 01:10:06.370: INFO: Restart count of pod container-probe-2271/liveness-c3ab6c19-50e1-4527-86ef-59075e529406 is now 1 (24.191887129s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 01:10:06.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2271" for this suite.
• [SLOW TEST:28.875 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":278,"skipped":4589,"failed":0}
SSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 01:10:06.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 01:10:06.505: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-be6b71da-2534-410f-aa8e-2b248fe81b81" in namespace "security-context-test-5437" to be "Succeeded or Failed"
May 21 01:10:06.531: INFO: Pod "busybox-privileged-false-be6b71da-2534-410f-aa8e-2b248fe81b81": Phase="Pending", Reason="", readiness=false. Elapsed: 25.477807ms
May 21 01:10:08.535: INFO: Pod "busybox-privileged-false-be6b71da-2534-410f-aa8e-2b248fe81b81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029930728s
May 21 01:10:10.539: INFO: Pod "busybox-privileged-false-be6b71da-2534-410f-aa8e-2b248fe81b81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034053311s
May 21 01:10:10.539: INFO: Pod "busybox-privileged-false-be6b71da-2534-410f-aa8e-2b248fe81b81" satisfied condition "Succeeded or Failed"
May 21 01:10:10.560: INFO: Got logs for pod "busybox-privileged-false-be6b71da-2534-410f-aa8e-2b248fe81b81": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 01:10:10.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5437" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":279,"skipped":4592,"failed":0}
SSSS
------------------------------
[k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 01:10:10.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 01:12:10.657: INFO: Deleting pod "var-expansion-fed7211f-3fe8-4454-9079-1636a99d6ada" in namespace "var-expansion-2966"
May 21 01:12:10.661: INFO: Wait up to 5m0s for pod "var-expansion-fed7211f-3fe8-4454-9079-1636a99d6ada" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 01:12:12.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2966" for this suite.
• [SLOW TEST:122.129 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":288,"completed":280,"skipped":4596,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 01:12:12.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 01:12:30.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5789" for this suite.
• [SLOW TEST:18.138 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":288,"completed":281,"skipped":4643,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 01:12:30.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May 21 01:12:38.986: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 21 01:12:39.040: INFO: Pod pod-with-poststart-exec-hook still exists
May 21 01:12:41.040: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 21 01:12:41.045: INFO: Pod pod-with-poststart-exec-hook still exists
May 21 01:12:43.040: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 21 01:12:43.046: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 01:12:43.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3515" for this suite.
• [SLOW TEST:12.217 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":288,"completed":282,"skipped":4656,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 01:12:43.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 01:12:47.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7702" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":288,"completed":283,"skipped":4668,"failed":0}
SSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 01:12:47.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-9a549cda-d062-458a-8c96-8f4f47372cb8
STEP: Creating a pod to test consume secrets
May 21 01:12:47.971: INFO: Waiting up to 5m0s for pod "pod-secrets-751aabd1-4d27-469d-9bee-219cf93ded64" in namespace "secrets-5911" to be "Succeeded or Failed"
May 21 01:12:47.999: INFO: Pod "pod-secrets-751aabd1-4d27-469d-9bee-219cf93ded64": Phase="Pending", Reason="", readiness=false. Elapsed: 28.475728ms
May 21 01:12:50.004: INFO: Pod "pod-secrets-751aabd1-4d27-469d-9bee-219cf93ded64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033049539s
May 21 01:12:52.009: INFO: Pod "pod-secrets-751aabd1-4d27-469d-9bee-219cf93ded64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037814537s
STEP: Saw pod success
May 21 01:12:52.009: INFO: Pod "pod-secrets-751aabd1-4d27-469d-9bee-219cf93ded64" satisfied condition "Succeeded or Failed"
May 21 01:12:52.012: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-751aabd1-4d27-469d-9bee-219cf93ded64 container secret-volume-test:
STEP: delete the pod
May 21 01:12:52.295: INFO: Waiting for pod pod-secrets-751aabd1-4d27-469d-9bee-219cf93ded64 to disappear
May 21 01:12:52.300: INFO: Pod pod-secrets-751aabd1-4d27-469d-9bee-219cf93ded64 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 01:12:52.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5911" for this suite.
STEP: Destroying namespace "secret-namespace-5979" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":288,"completed":284,"skipped":4672,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 01:12:52.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 21 01:12:52.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May 21 01:12:55.357: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7141 create -f -'
May 21 01:12:58.854: INFO: stderr: ""
May 21 01:12:58.855: INFO: stdout: "e2e-test-crd-publish-openapi-5011-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
May 21 01:12:58.855: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7141 delete e2e-test-crd-publish-openapi-5011-crds test-cr'
May 21 01:12:58.966: INFO: stderr: ""
May 21 01:12:58.967: INFO: stdout: "e2e-test-crd-publish-openapi-5011-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
May 21 01:12:58.967: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7141 apply -f -'
May 21 01:12:59.257: INFO: stderr: ""
May 21 01:12:59.257: INFO: stdout: "e2e-test-crd-publish-openapi-5011-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
May 21 01:12:59.257: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7141 delete e2e-test-crd-publish-openapi-5011-crds test-cr'
May 21 01:12:59.374: INFO: stderr: ""
May 21 01:12:59.374: INFO: stdout: "e2e-test-crd-publish-openapi-5011-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
May 21 01:12:59.374: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5011-crds'
May 21 01:12:59.666: INFO: stderr: ""
May 21 01:12:59.666: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5011-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 01:13:02.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7141" for this suite.
• [SLOW TEST:10.314 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD preserving unknown fields at the schema root [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":288,"completed":285,"skipped":4674,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 01:13:02.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 21 01:13:03.143: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 21 01:13:05.164: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725620383, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725620383, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725620383, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725620383, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 21 01:13:08.201: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 01:13:08.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9080" for this suite.
STEP: Destroying namespace "webhook-9080-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.818 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":288,"completed":286,"skipped":4683,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 01:13:08.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
May 21 01:13:08.564: INFO: >>> kubeConfig: /root/.kube/config
May 21 01:13:11.514: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 01:13:21.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6954" for this suite.
• [SLOW TEST:12.777 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":288,"completed":287,"skipped":4692,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 21 01:13:21.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 21 01:13:22.024: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 21 01:13:24.033: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725620402, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725620402, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725620402, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725620402, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 21 01:13:27.088: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
May 21 01:13:27.111: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 21 01:13:27.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7794" for this suite.
STEP: Destroying namespace "webhook-7794-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.198 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should deny crd creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":288,"completed":288,"skipped":4736,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
May 21 01:13:27.422: INFO: Running AfterSuite actions on all nodes
May 21 01:13:27.422: INFO: Running AfterSuite actions on node 1
May 21 01:13:27.422: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":288,"completed":288,"skipped":4807,"failed":0}
Ran 288 of 5095 Specs in 5747.111 seconds
SUCCESS! -- 288 Passed | 0 Failed | 0 Pending | 4807 Skipped
PASS