I0204 12:09:45.270414 7 e2e.go:129] Starting e2e run "566162d3-aff6-46f4-8931-4930f825c480" on Ginkgo node 1
{"msg":"Test Suite starting","total":311,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1612440583 - Will randomize all specs
Will run 311 of 5640 specs

Feb 4 12:09:45.334: INFO: >>> kubeConfig: /root/.kube/config
Feb 4 12:09:45.338: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 4 12:09:45.357: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 4 12:09:45.382: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 4 12:09:45.382: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 4 12:09:45.382: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 4 12:09:45.388: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Feb 4 12:09:45.388: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 4 12:09:45.388: INFO: e2e test version: v1.21.0-alpha.2
Feb 4 12:09:45.389: INFO: kube-apiserver version: v1.21.0-alpha.0
Feb 4 12:09:45.389: INFO: >>> kubeConfig: /root/.kube/config
Feb 4 12:09:45.393: INFO: Cluster IP family: ipv4
SSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 4 12:09:45.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Feb 4 12:09:45.512: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
STEP: Creating a pod to test downward API volume plugin
Feb 4 12:09:45.538: INFO: Waiting up to 5m0s for pod "downwardapi-volume-06dbf40c-295b-4d10-9b3d-c5f395e71408" in namespace "projected-9853" to be "Succeeded or Failed"
Feb 4 12:09:45.609: INFO: Pod "downwardapi-volume-06dbf40c-295b-4d10-9b3d-c5f395e71408": Phase="Pending", Reason="", readiness=false. Elapsed: 71.074954ms
Feb 4 12:09:48.690: INFO: Pod "downwardapi-volume-06dbf40c-295b-4d10-9b3d-c5f395e71408": Phase="Pending", Reason="", readiness=false. Elapsed: 3.151552208s
Feb 4 12:09:50.820: INFO: Pod "downwardapi-volume-06dbf40c-295b-4d10-9b3d-c5f395e71408": Phase="Pending", Reason="", readiness=false. Elapsed: 5.281477413s
Feb 4 12:09:53.096: INFO: Pod "downwardapi-volume-06dbf40c-295b-4d10-9b3d-c5f395e71408": Phase="Pending", Reason="", readiness=false. Elapsed: 7.557518058s
Feb 4 12:09:55.100: INFO: Pod "downwardapi-volume-06dbf40c-295b-4d10-9b3d-c5f395e71408": Phase="Pending", Reason="", readiness=false. Elapsed: 9.561520504s
Feb 4 12:09:57.104: INFO: Pod "downwardapi-volume-06dbf40c-295b-4d10-9b3d-c5f395e71408": Phase="Pending", Reason="", readiness=false. Elapsed: 11.566003937s
Feb 4 12:10:00.239: INFO: Pod "downwardapi-volume-06dbf40c-295b-4d10-9b3d-c5f395e71408": Phase="Pending", Reason="", readiness=false. Elapsed: 14.700302953s
Feb 4 12:10:02.244: INFO: Pod "downwardapi-volume-06dbf40c-295b-4d10-9b3d-c5f395e71408": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.705483574s
STEP: Saw pod success
Feb 4 12:10:02.244: INFO: Pod "downwardapi-volume-06dbf40c-295b-4d10-9b3d-c5f395e71408" satisfied condition "Succeeded or Failed"
Feb 4 12:10:02.247: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-06dbf40c-295b-4d10-9b3d-c5f395e71408 container client-container:
STEP: delete the pod
Feb 4 12:10:02.438: INFO: Waiting for pod downwardapi-volume-06dbf40c-295b-4d10-9b3d-c5f395e71408 to disappear
Feb 4 12:10:02.489: INFO: Pod downwardapi-volume-06dbf40c-295b-4d10-9b3d-c5f395e71408 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 4 12:10:02.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9853" for this suite.
• [SLOW TEST:17.103 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":311,"completed":1,"skipped":4,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
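For context, the pod this spec creates exposes metadata.name to the container as a file through a projected downward API volume. A minimal client-go sketch of an equivalent pod follows; the pod name, image, command, and kubeconfig path are illustrative assumptions, not the suite's actual values (those live in test/e2e/common/projected_downwardapi.go):

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Client setup against the same kubeconfig path the suite logs.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"}, // placeholder name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox", // stand-in for the suite's test image
				Command:      []string{"sh", "-c", "cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					// Projected downward API volume: surfaces metadata.name
					// as the file "podname" inside the mount.
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```

The test then waits for the pod to reach "Succeeded" and checks that the container printed its own pod name, which matches the Pending/Succeeded polling visible above.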
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037403, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037403, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 12:10:08.539: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037403, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037403, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037403, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037403, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 12:10:09.773: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037403, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037403, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037403, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037403, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 12:10:12.497: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037403, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037403, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037403, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037403, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 12:10:13.946: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037403, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037403, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037403, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037403, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 4 12:10:16.697: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:10:16.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-227" for this suite. STEP: Destroying namespace "webhook-227-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:14.726 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":311,"completed":2,"skipped":44,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:10:17.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 4 
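The "Registering the mutating configmap webhook" step amounts to creating a MutatingWebhookConfiguration that points at the sample-webhook-deployment's service. A hedged sketch of that registration follows; the configuration name, webhook name, handler path, and CA bundle are placeholders (the real registration is in test/e2e/apimachinery/webhook.go):

```go
package main

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	path := "/mutating-configmaps" // assumed handler path on the webhook server
	sideEffects := admissionregistrationv1.SideEffectClassNone
	failurePolicy := admissionregistrationv1.Fail

	webhookCfg := &admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-mutating-webhook-configmap"}, // placeholder
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name: "adding-configmap-data.example.com", // placeholder qualified name
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-227", // namespace seen in the log
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: []byte("<PEM bundle from the 'Setting up server cert' step>"),
			},
			// Intercept CREATE of v1 ConfigMaps so the server can inject data.
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"configmaps"},
				},
			}},
			SideEffects:             &sideEffects,
			FailurePolicy:           &failurePolicy,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}

	if _, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().
		Create(context.TODO(), webhookCfg, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```

After registration, the "create a configmap that should be updated" step succeeds only if the API server round-trips the ConfigMap through this webhook.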
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 4 12:10:17.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 4 12:10:22.078: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037418, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037418, loc:(*time.Location)(0x7886c60)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-7fd5fddcbd\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037420, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037420, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)}
Feb 4 12:10:25.689: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037420, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037420, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037422, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037418, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 4 12:10:27.174: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037420, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037420, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037422, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037418, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 4 12:10:28.205: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037420, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037420, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037422, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037418, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 4 12:10:30.211: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037420, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037420, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037422, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037418, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 4 12:10:33.988: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037420, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037420, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037422, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037418, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 4 12:10:36.038: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037420, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037420, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037422, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037418, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 4 12:10:36.332: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037420, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037420, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037422, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037418, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 4 12:10:38.983: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037420, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037420, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037422, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037418, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 4 12:10:40.228: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037420, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037420, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037422, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037418, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 4 12:10:42.082: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037420, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037420, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037422, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037418, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 4 12:10:46.569: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 4 12:10:46.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9521" for this suite.
STEP: Destroying namespace "webhook-9521-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
• [SLOW TEST:31.902 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":311,"completed":3,"skipped":66,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 4 12:10:49.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 4 12:10:54.413: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 4 12:10:58.354: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037454, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037454, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037454, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037454, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 4 12:11:00.816: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037454, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037454, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037454, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037454, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 4 12:11:02.358: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037454, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037454, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037454, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037454, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 4 12:11:05.601: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 4 12:11:07.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9613" for this suite.
STEP: Destroying namespace "webhook-9613-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
• [SLOW TEST:18.383 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":311,"completed":4,"skipped":66,"failed":0}
SSSS
------------------------------
[sig-network] Services should provide secure master service [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 4 12:11:07.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:740
[It] should provide secure master service [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 4 12:11:08.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7419" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:744
•
{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":311,"completed":5,"skipped":70,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
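The "secure master service" check essentially reads the default kubernetes Service and confirms it exposes HTTPS on port 443. A simplified sketch of that lookup (the real assertions live in test/e2e/network/service.go):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The API server publishes itself as the "kubernetes" service in "default".
	svc, err := cs.CoreV1().Services("default").Get(context.TODO(), "kubernetes", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range svc.Spec.Ports {
		// Expect the master to be reachable over HTTPS on port 443.
		if p.Name == "https" && p.Port == 443 {
			fmt.Println("found secure master service port:", p.Port)
		}
	}
}
```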
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 4 12:11:08.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 4 12:11:09.807: INFO: Waiting up to 5m0s for pod "pod-67b18e76-1768-4854-84a8-470a02e3bffe" in namespace "emptydir-2516" to be "Succeeded or Failed"
Feb 4 12:11:10.024: INFO: Pod "pod-67b18e76-1768-4854-84a8-470a02e3bffe": Phase="Pending", Reason="", readiness=false. Elapsed: 217.508992ms
Feb 4 12:11:12.446: INFO: Pod "pod-67b18e76-1768-4854-84a8-470a02e3bffe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.639159551s
Feb 4 12:11:14.847: INFO: Pod "pod-67b18e76-1768-4854-84a8-470a02e3bffe": Phase="Pending", Reason="", readiness=false. Elapsed: 5.040380251s
Feb 4 12:11:17.082: INFO: Pod "pod-67b18e76-1768-4854-84a8-470a02e3bffe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.274568507s
STEP: Saw pod success
Feb 4 12:11:17.082: INFO: Pod "pod-67b18e76-1768-4854-84a8-470a02e3bffe" satisfied condition "Succeeded or Failed"
Feb 4 12:11:17.303: INFO: Trying to get logs from node latest-worker2 pod pod-67b18e76-1768-4854-84a8-470a02e3bffe container test-container:
STEP: delete the pod
Feb 4 12:11:17.385: INFO: Waiting for pod pod-67b18e76-1768-4854-84a8-470a02e3bffe to disappear
Feb 4 12:11:17.432: INFO: Pod pod-67b18e76-1768-4854-84a8-470a02e3bffe no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 4 12:11:17.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2516" for this suite.
• [SLOW TEST:8.496 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":6,"skipped":125,"failed":0}
SSSSS
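"(root,0644,tmpfs)" means: running as root, the test container writes a 0644 file into a memory-backed emptyDir and verifies the mode and contents. A rough pod equivalent, with the image and command standing in for the suite's mount-test image:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"}, // placeholder name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // stand-in for the e2e mounttest image
				// Write a file as root with mode 0644 and read it back.
				Command: []string{"sh", "-c",
					"echo hello > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" makes the emptyDir a tmpfs mount.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```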
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 4 12:11:17.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0204 12:12:01.935828 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Feb 4 12:13:04.029: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
Feb 4 12:13:04.030: INFO: Deleting pod "simpletest.rc-2bm2x" in namespace "gc-3249"
Feb 4 12:13:04.360: INFO: Deleting pod "simpletest.rc-4427m" in namespace "gc-3249"
Feb 4 12:13:06.816: INFO: Deleting pod "simpletest.rc-46594" in namespace "gc-3249"
Feb 4 12:13:08.164: INFO: Deleting pod "simpletest.rc-58hht" in namespace "gc-3249"
Feb 4 12:13:08.875: INFO: Deleting pod "simpletest.rc-6q22n" in namespace "gc-3249"
Feb 4 12:13:10.039: INFO: Deleting pod "simpletest.rc-84nck" in namespace "gc-3249"
Feb 4 12:13:11.342: INFO: Deleting pod "simpletest.rc-kwjgt" in namespace "gc-3249"
Feb 4 12:13:12.493: INFO: Deleting pod "simpletest.rc-ng8kf" in namespace "gc-3249"
Feb 4 12:13:13.571: INFO: Deleting pod "simpletest.rc-q7djp" in namespace "gc-3249"
Feb 4 12:13:14.714: INFO: Deleting pod "simpletest.rc-x22xt" in namespace "gc-3249"
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 4 12:13:15.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3249" for this suite.
• [SLOW TEST:118.304 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":311,"completed":7,"skipped":130,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
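"Delete options say so" refers to the orphan propagation policy: deleting the ReplicationController with DeletePropagationOrphan must leave its pods running, which is why the test later removes the simpletest.rc-* pods by hand. A sketch of that delete call; the RC name is inferred from the pod-name prefix in the log and may not match the suite exactly:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Orphan the RC's pods instead of cascading the delete; the garbage
	// collector must then leave the simpletest.rc-* pods untouched.
	orphan := metav1.DeletePropagationOrphan
	if err := cs.CoreV1().ReplicationControllers("gc-3249").Delete(
		context.TODO(), "simpletest.rc", // assumed RC name, matching the pod prefix
		metav1.DeleteOptions{PropagationPolicy: &orphan}); err != nil {
		panic(err)
	}
}
```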
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 4 12:13:15.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb 4 12:13:36.928: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3481 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 4 12:13:36.928: INFO: >>> kubeConfig: /root/.kube/config
I0204 12:13:37.091855 7 log.go:181] (0xc002ef4790) (0xc003683860) Create stream
I0204 12:13:37.091884 7 log.go:181] (0xc002ef4790) (0xc003683860) Stream added, broadcasting: 1
I0204 12:13:37.093497 7 log.go:181] (0xc002ef4790) Reply frame received for 1
I0204 12:13:37.093525 7 log.go:181] (0xc002ef4790) (0xc003683900) Create stream
I0204 12:13:37.093540 7 log.go:181] (0xc002ef4790) (0xc003683900) Stream added, broadcasting: 3
I0204 12:13:37.094121 7 log.go:181] (0xc002ef4790) Reply frame received for 3
I0204 12:13:37.094147 7 log.go:181] (0xc002ef4790) (0xc0011d14a0) Create stream
I0204 12:13:37.094158 7 log.go:181] (0xc002ef4790) (0xc0011d14a0) Stream added, broadcasting: 5
I0204 12:13:37.094672 7 log.go:181] (0xc002ef4790) Reply frame received for 5
I0204 12:13:37.169247 7 log.go:181] (0xc002ef4790) Data frame received for 5
I0204 12:13:37.169275 7 log.go:181] (0xc0011d14a0) (5) Data frame handling
I0204 12:13:37.169311 7 log.go:181] (0xc002ef4790) Data frame received for 3
I0204 12:13:37.169318 7 log.go:181] (0xc003683900) (3) Data frame handling
I0204 12:13:37.169323 7 log.go:181] (0xc003683900) (3) Data frame sent
I0204 12:13:37.169330 7 log.go:181] (0xc002ef4790) Data frame received for 3
I0204 12:13:37.169335 7 log.go:181] (0xc003683900) (3) Data frame handling
I0204 12:13:37.169849 7 log.go:181] (0xc002ef4790) Data frame received for 1
I0204 12:13:37.169881 7 log.go:181] (0xc003683860) (1) Data frame handling
I0204 12:13:37.169906 7 log.go:181] (0xc003683860) (1) Data frame sent
I0204 12:13:37.169926 7 log.go:181] (0xc002ef4790) (0xc003683860) Stream removed, broadcasting: 1
I0204 12:13:37.169958 7 log.go:181] (0xc002ef4790) Go away received
I0204 12:13:37.170175 7 log.go:181] (0xc002ef4790) (0xc003683860) Stream removed, broadcasting: 1
I0204 12:13:37.170197 7 log.go:181] (0xc002ef4790) (0xc003683900) Stream removed, broadcasting: 3
I0204 12:13:37.170214 7 log.go:181] (0xc002ef4790) (0xc0011d14a0) Stream removed, broadcasting: 5
Feb 4 12:13:37.170: INFO: Exec stderr: ""
Feb 4 12:13:37.170: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3481 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 4 12:13:37.170: INFO: >>> kubeConfig: /root/.kube/config
I0204 12:13:37.244556 7 log.go:181] (0xc002b0a580) (0xc001308be0) Create stream
I0204 12:13:37.244577 7 log.go:181] (0xc002b0a580) (0xc001308be0) Stream added, broadcasting: 1
I0204 12:13:37.246209 7 log.go:181] (0xc002b0a580) Reply frame received for 1
I0204 12:13:37.246235 7 log.go:181] (0xc002b0a580) (0xc000f043c0) Create stream
I0204 12:13:37.246246 7 log.go:181] (0xc002b0a580) (0xc000f043c0) Stream added, broadcasting: 3
I0204 12:13:37.246788 7 log.go:181] (0xc002b0a580) Reply frame received for 3
I0204 12:13:37.246802 7 log.go:181] (0xc002b0a580) (0xc001308d20) Create stream
I0204 12:13:37.246816 7 log.go:181] (0xc002b0a580) (0xc001308d20) Stream added, broadcasting: 5
I0204 12:13:37.247456 7 log.go:181] (0xc002b0a580) Reply frame received for 5
I0204 12:13:37.303121 7 log.go:181] (0xc002b0a580) Data frame received for 3
I0204 12:13:37.303155 7 log.go:181] (0xc000f043c0) (3) Data frame handling
I0204 12:13:37.303168 7 log.go:181] (0xc000f043c0) (3) Data frame sent
I0204 12:13:37.303176 7 log.go:181] (0xc002b0a580) Data frame received for 3
I0204 12:13:37.303199 7 log.go:181] (0xc002b0a580) Data frame received for 5
I0204 12:13:37.303236 7 log.go:181] (0xc001308d20) (5) Data frame handling
I0204 12:13:37.303269 7 log.go:181] (0xc000f043c0) (3) Data frame handling
I0204 12:13:37.304203 7 log.go:181] (0xc002b0a580) Data frame received for 1
I0204 12:13:37.304218 7 log.go:181] (0xc001308be0) (1) Data frame handling
I0204 12:13:37.304236 7 log.go:181] (0xc001308be0) (1) Data frame sent
I0204 12:13:37.304248 7 log.go:181] (0xc002b0a580) (0xc001308be0) Stream removed, broadcasting: 1
I0204 12:13:37.304301 7 log.go:181] (0xc002b0a580) (0xc001308be0) Stream removed, broadcasting: 1
I0204 12:13:37.304330 7 log.go:181] (0xc002b0a580) (0xc000f043c0) Stream removed, broadcasting: 3
I0204 12:13:37.304369 7 log.go:181] (0xc002b0a580) Go away received
I0204 12:13:37.304453 7 log.go:181] (0xc002b0a580) (0xc001308d20) Stream removed, broadcasting: 5
Feb 4 12:13:37.304: INFO: Exec stderr: ""
Feb 4 12:13:37.304: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3481 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 4 12:13:37.304: INFO: >>> kubeConfig: /root/.kube/config
I0204 12:13:37.480406 7 log.go:181] (0xc0009d3760) (0xc000f04a00) Create stream
I0204 12:13:37.480432 7 log.go:181] (0xc0009d3760) (0xc000f04a00) Stream added, broadcasting: 1
I0204 12:13:37.482324 7 log.go:181] (0xc0009d3760) Reply frame received for 1
I0204 12:13:37.482350 7 log.go:181] (0xc0009d3760) (0xc001308e60) Create stream
I0204 12:13:37.482359 7 log.go:181] (0xc0009d3760) (0xc001308e60) Stream added, broadcasting: 3
I0204 12:13:37.482898 7 log.go:181] (0xc0009d3760) Reply frame received for 3
I0204 12:13:37.482921 7 log.go:181] (0xc0009d3760) (0xc0011d1540) Create stream
I0204 12:13:37.482931 7 log.go:181] (0xc0009d3760) (0xc0011d1540) Stream added, broadcasting: 5
I0204 12:13:37.483574 7 log.go:181] (0xc0009d3760) Reply frame received for 5
I0204 12:13:37.542628 7 log.go:181] (0xc0009d3760) Data frame received for 5
I0204 12:13:37.542659 7 log.go:181] (0xc0011d1540) (5) Data frame handling
I0204 12:13:37.542688 7 log.go:181] (0xc0009d3760) Data frame received for 3
I0204 12:13:37.542707 7 log.go:181] (0xc001308e60) (3) Data frame handling
I0204 12:13:37.542719 7 log.go:181] (0xc001308e60) (3) Data frame sent
I0204 12:13:37.542736 7 log.go:181] (0xc0009d3760) Data frame received for 3
I0204 12:13:37.542746 7 log.go:181] (0xc001308e60) (3) Data frame handling
I0204 12:13:37.544051 7 log.go:181] (0xc0009d3760) Data frame received for 1
I0204 12:13:37.544098 7 log.go:181] (0xc000f04a00) (1) Data frame handling
I0204 12:13:37.544113 7 log.go:181] (0xc000f04a00) (1) Data frame sent
I0204 12:13:37.544128 7 log.go:181] (0xc0009d3760) (0xc000f04a00) Stream removed, broadcasting: 1
I0204 12:13:37.544157 7 log.go:181] (0xc0009d3760) Go away received
I0204 12:13:37.544274 7 log.go:181] (0xc0009d3760) (0xc000f04a00) Stream removed, broadcasting: 1
I0204 12:13:37.544297 7 log.go:181] (0xc0009d3760) (0xc001308e60) Stream removed, broadcasting: 3
I0204 12:13:37.544311 7 log.go:181] (0xc0009d3760) (0xc0011d1540) Stream removed, broadcasting: 5
Feb 4 12:13:37.544: INFO: Exec stderr: ""
Feb 4 12:13:37.544: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3481 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 4 12:13:37.544: INFO: >>> kubeConfig: /root/.kube/config
I0204 12:13:37.620286 7 log.go:181] (0xc002ef4a50) (0xc003683ae0) Create stream
I0204 12:13:37.620323 7 log.go:181] (0xc002ef4a50) (0xc003683ae0) Stream added, broadcasting: 1
I0204 12:13:37.627420 7 log.go:181] (0xc002ef4a50) Reply frame received for 1
I0204 12:13:37.627449 7 log.go:181] (0xc002ef4a50) (0xc003683b80) Create stream
I0204 12:13:37.627458 7 log.go:181] (0xc002ef4a50) (0xc003683b80) Stream added, broadcasting: 3
I0204 12:13:37.628651 7 log.go:181] (0xc002ef4a50) Reply frame received for 3
I0204 12:13:37.628682 7 log.go:181] (0xc002ef4a50) (0xc001308fa0) Create stream
I0204 12:13:37.628703 7 log.go:181] (0xc002ef4a50) (0xc001308fa0) Stream added, broadcasting: 5
I0204 12:13:37.629722 7 log.go:181] (0xc002ef4a50) Reply frame received for 5
I0204 12:13:37.678536 7 log.go:181] (0xc002ef4a50) Data frame received for 5
I0204 12:13:37.678639 7 log.go:181] (0xc001308fa0) (5) Data frame handling
I0204 12:13:37.678716 7 log.go:181] (0xc002ef4a50) Data frame received for 3
I0204 12:13:37.678740 7 log.go:181] (0xc003683b80) (3) Data frame handling
I0204 12:13:37.678758 7 log.go:181] (0xc003683b80) (3) Data frame sent
I0204 12:13:37.678780 7 log.go:181] (0xc002ef4a50) Data frame received for 3
I0204 12:13:37.678796 7 log.go:181] (0xc003683b80) (3) Data frame handling
I0204 12:13:37.679948 7 log.go:181] (0xc002ef4a50) Data frame received for 1
I0204 12:13:37.679984 7 log.go:181] (0xc003683ae0) (1) Data frame handling
I0204 12:13:37.680023 7 log.go:181] (0xc003683ae0) (1) Data frame sent
I0204 12:13:37.680048 7 log.go:181] (0xc002ef4a50) (0xc003683ae0) Stream removed, broadcasting: 1
I0204 12:13:37.680077 7 log.go:181] (0xc002ef4a50) Go away received
I0204 12:13:37.680130 7 log.go:181] (0xc002ef4a50) (0xc003683ae0) Stream removed, broadcasting: 1
I0204 12:13:37.680150 7 log.go:181] (0xc002ef4a50) (0xc003683b80) Stream removed, broadcasting: 3
I0204 12:13:37.680163 7 log.go:181] (0xc002ef4a50) (0xc001308fa0) Stream removed, broadcasting: 5
Feb 4 12:13:37.680: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb 4 12:13:37.680: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3481 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 4 12:13:37.680: INFO: >>> kubeConfig: /root/.kube/config
I0204 12:13:37.735894 7 log.go:181] (0xc002804000) (0xc0003306e0) Create stream
I0204 12:13:37.735923 7 log.go:181] (0xc002804000) (0xc0003306e0) Stream added, broadcasting: 1
I0204 12:13:37.738207 7 log.go:181] (0xc002804000) Reply frame received for 1
I0204 12:13:37.738258 7 log.go:181] (0xc002804000) (0xc003683c20) Create stream
I0204 12:13:37.738281 7 log.go:181] (0xc002804000) (0xc003683c20) Stream added, broadcasting: 3
I0204 12:13:37.739365 7 log.go:181] (0xc002804000) Reply frame received for 3
I0204 12:13:37.739392 7 log.go:181] (0xc002804000) (0xc000330820) Create stream
I0204 12:13:37.739406 7 log.go:181] (0xc002804000) (0xc000330820) Stream added, broadcasting: 5
I0204 12:13:37.740336 7 log.go:181] (0xc002804000) Reply frame received for 5
I0204 12:13:37.795458 7 log.go:181] (0xc002804000) Data frame received for 5
I0204 12:13:37.795481 7 log.go:181] (0xc000330820) (5) Data frame handling
I0204 12:13:37.795501 7 log.go:181] (0xc002804000) Data frame received for 3
I0204 12:13:37.795512 7 log.go:181] (0xc003683c20) (3) Data frame handling
I0204 12:13:37.795523 7 log.go:181] (0xc003683c20) (3) Data frame sent
I0204 12:13:37.795533 7 log.go:181] (0xc002804000) Data frame received for 3
I0204 12:13:37.795545 7 log.go:181] (0xc003683c20) (3) Data frame handling
I0204 12:13:37.796400 7 log.go:181] (0xc002804000) Data frame received for 1
I0204 12:13:37.796439 7 log.go:181] (0xc0003306e0) (1) Data frame handling
I0204 12:13:37.796463 7 log.go:181] (0xc0003306e0) (1) Data frame sent
I0204 12:13:37.796472 7 log.go:181] (0xc002804000) (0xc0003306e0) Stream removed, broadcasting: 1
I0204 12:13:37.796482 7 log.go:181] (0xc002804000) Go away received
I0204 12:13:37.796601 7 log.go:181] (0xc002804000) (0xc0003306e0) Stream removed, broadcasting: 1
I0204 12:13:37.796628 7 log.go:181] (0xc002804000) (0xc003683c20) Stream removed, broadcasting: 3
I0204 12:13:37.796642 7 log.go:181] (0xc002804000) (0xc000330820) Stream removed, broadcasting: 5
Feb 4 12:13:37.796: INFO: Exec stderr: ""
Feb 4 12:13:37.796: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3481 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 4 12:13:37.796: INFO: >>> kubeConfig: /root/.kube/config
I0204 12:13:37.892232 7 log.go:181] (0xc000ab4840) (0xc001360500) Create stream
I0204 12:13:37.892249 7 log.go:181] (0xc000ab4840) (0xc001360500) Stream added, broadcasting: 1
I0204 12:13:37.893975 7 log.go:181] (0xc000ab4840) Reply frame received for 1
I0204 12:13:37.894007 7 log.go:181] (0xc000ab4840) (0xc001360640) Create stream
I0204 12:13:37.894017 7 log.go:181] (0xc000ab4840) (0xc001360640) Stream added, broadcasting: 3
I0204 12:13:37.894780 7 log.go:181] (0xc000ab4840) Reply frame received for 3
I0204 12:13:37.894800 7 log.go:181] (0xc000ab4840) (0xc001360780) Create stream
I0204 12:13:37.894808 7 log.go:181] (0xc000ab4840) (0xc001360780) Stream added, broadcasting: 5
I0204 12:13:37.895328 7 log.go:181] (0xc000ab4840) Reply frame received for 5
I0204 12:13:37.953482 7 log.go:181] (0xc000ab4840) Data frame received for 3
I0204 12:13:37.953506 7 log.go:181] (0xc001360640) (3) Data frame handling
I0204 12:13:37.953513 7 log.go:181] (0xc001360640) (3) Data frame sent
I0204 12:13:37.953540 7 log.go:181] (0xc000ab4840) Data frame received for 5
I0204 12:13:37.953586 7 log.go:181] (0xc001360780) (5) Data frame handling
I0204 12:13:37.953618 7 log.go:181] (0xc000ab4840) Data frame received for 3
I0204 12:13:37.953631 7 log.go:181] (0xc001360640) (3) Data frame handling
I0204 12:13:37.954359 7 log.go:181] (0xc000ab4840) Data frame received for 1
I0204 12:13:37.954375 7 log.go:181] (0xc001360500) (1) Data frame handling
I0204 12:13:37.954392 7 log.go:181] (0xc001360500) (1) Data frame sent
I0204 12:13:37.954402 7 log.go:181] (0xc000ab4840) (0xc001360500) Stream removed, broadcasting: 1
I0204 12:13:37.954411 7 log.go:181] (0xc000ab4840) Go away received
I0204 12:13:37.954488 7 log.go:181] (0xc000ab4840) (0xc001360500) Stream removed, broadcasting: 1
I0204 12:13:37.954504 7 log.go:181] (0xc000ab4840) (0xc001360640) Stream removed, broadcasting: 3
I0204 12:13:37.954518 7 log.go:181] (0xc000ab4840) (0xc001360780) Stream removed, broadcasting: 5
Feb 4 12:13:37.954: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb 4 12:13:37.954: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3481 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 4 12:13:37.954: INFO: >>> kubeConfig: /root/.kube/config
I0204 12:13:38.001858 7 log.go:181] (0xc000ab4c60) (0xc001361040) Create stream
I0204 12:13:38.001881 7 log.go:181] (0xc000ab4c60) (0xc001361040) Stream added, broadcasting: 1
I0204 12:13:38.003299 7 log.go:181] (0xc000ab4c60) Reply frame received for 1
I0204 12:13:38.003339 7 log.go:181] (0xc000ab4c60) (0xc003683cc0) Create stream
I0204 12:13:38.003352 7 log.go:181] (0xc000ab4c60) (0xc003683cc0) Stream added, broadcasting: 3
I0204 12:13:38.003807 7 log.go:181] (0xc000ab4c60) Reply frame received for 3
I0204 12:13:38.003823 7 log.go:181] (0xc000ab4c60) (0xc003683d60) Create stream
I0204 12:13:38.003831 7 log.go:181] (0xc000ab4c60) (0xc003683d60) Stream added, broadcasting: 5
I0204 12:13:38.004301 7 log.go:181] (0xc000ab4c60) Reply frame received for 5
I0204 12:13:38.051818 7 log.go:181] (0xc000ab4c60) Data frame received for 5
I0204 12:13:38.051859 7 log.go:181] (0xc003683d60) (5) Data frame handling
I0204 12:13:38.051899 7 log.go:181] (0xc000ab4c60) Data frame received for 3
I0204 12:13:38.051909 7 log.go:181] (0xc003683cc0) (3) Data frame handling
I0204 12:13:38.051925 7 log.go:181] (0xc003683cc0) (3) Data frame sent
I0204 12:13:38.051934 7 log.go:181] (0xc000ab4c60) Data frame received for 3
I0204 12:13:38.051971 7 log.go:181] (0xc003683cc0) (3) Data frame handling
I0204 12:13:38.053318 7 log.go:181] (0xc000ab4c60) Data frame received for 1
I0204 12:13:38.053334 7 log.go:181] (0xc001361040) (1) Data frame handling
I0204 12:13:38.053343 7 log.go:181] (0xc001361040) (1) Data frame sent
I0204 12:13:38.053352 7 log.go:181] (0xc000ab4c60) (0xc001361040) Stream removed, broadcasting: 1
I0204 12:13:38.053444 7 log.go:181] (0xc000ab4c60) Go away received
I0204 12:13:38.053517 7 log.go:181] (0xc000ab4c60) (0xc001361040) Stream removed, broadcasting: 1
I0204 12:13:38.053541 7 log.go:181] (0xc000ab4c60) (0xc003683cc0) Stream removed, broadcasting: 3
I0204 12:13:38.053550 7 log.go:181] (0xc000ab4c60) (0xc003683d60) Stream removed, broadcasting: 5
Feb 4 12:13:38.053: INFO: Exec stderr: ""
Feb 4 12:13:38.053: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3481 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 4 12:13:38.053: INFO: >>> kubeConfig: /root/.kube/config
I0204 12:13:38.085841 7 log.go:181] (0xc0028046e0) (0xc000330e60) Create stream
I0204 12:13:38.085863 7 log.go:181] (0xc0028046e0) (0xc000330e60) Stream added, broadcasting: 1
I0204 12:13:38.087502 7 log.go:181] (0xc0028046e0) Reply frame received for 1
I0204 12:13:38.087527 7 log.go:181] (0xc0028046e0) (0xc003683e00) Create stream
I0204 12:13:38.087535 7 log.go:181] (0xc0028046e0) (0xc003683e00) Stream added, broadcasting: 3
I0204 12:13:38.088071 7 log.go:181] (0xc0028046e0) Reply frame received for 3
I0204 12:13:38.088102 7 log.go:181] (0xc0028046e0) (0xc003683ea0) Create stream
I0204 12:13:38.088117 7 log.go:181] (0xc0028046e0) (0xc003683ea0) Stream added, broadcasting: 5
I0204 12:13:38.088693 7 log.go:181] (0xc0028046e0) Reply frame received for 5
I0204 12:13:38.153097 7 log.go:181] (0xc0028046e0) Data frame received for 3
I0204 12:13:38.153123 7 log.go:181] (0xc003683e00) (3) Data frame handling
I0204 12:13:38.153148 7 log.go:181] (0xc003683e00) (3) Data frame sent
I0204 12:13:38.153168 7 log.go:181] (0xc0028046e0) Data frame received for 3
I0204 12:13:38.153184 7 log.go:181] (0xc003683e00) (3) Data frame handling
I0204 12:13:38.153198 7 log.go:181] (0xc0028046e0) Data frame received for 5
I0204 12:13:38.153235 7 log.go:181] (0xc003683ea0) (5) Data frame handling
I0204 12:13:38.154317 7 log.go:181] (0xc0028046e0) Data frame received for 1
I0204 12:13:38.154399 7 log.go:181] (0xc000330e60) (1) Data frame handling
I0204 12:13:38.154461 7 log.go:181] (0xc000330e60) (1) Data frame sent
I0204 12:13:38.154499 7 log.go:181] (0xc0028046e0) (0xc000330e60) Stream removed, broadcasting: 1
I0204 12:13:38.154529 7 log.go:181] (0xc0028046e0) Go away received
I0204 12:13:38.154578 7 log.go:181] (0xc0028046e0) (0xc000330e60) Stream removed, broadcasting: 1
I0204 12:13:38.154591 7 log.go:181] (0xc0028046e0) (0xc003683e00) Stream removed, broadcasting: 3
I0204 12:13:38.154597 7 log.go:181] (0xc0028046e0) (0xc003683ea0) Stream removed, broadcasting: 5
Feb 4 12:13:38.154: INFO: Exec stderr: ""
Feb 4 12:13:38.154: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3481 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 4 12:13:38.154: INFO: >>> kubeConfig: /root/.kube/config
I0204 12:13:38.242609 7 log.go:181] (0xc002804dc0) (0xc000331360) Create stream
I0204 12:13:38.242635 7 log.go:181] (0xc002804dc0) (0xc000331360) Stream added, broadcasting: 1
I0204 12:13:38.244395 7 log.go:181] (0xc002804dc0) Reply frame received for 1
I0204 12:13:38.244413 7 log.go:181] (0xc002804dc0) (0xc000f04dc0) Create stream
I0204 12:13:38.244419 7 log.go:181] (0xc002804dc0) (0xc000f04dc0) Stream added, broadcasting: 3
I0204 12:13:38.245185 7 log.go:181] (0xc002804dc0) Reply frame received for 3
I0204 12:13:38.245207 7 log.go:181] (0xc002804dc0) (0xc003683f40) Create stream
I0204 12:13:38.245216 7 log.go:181] (0xc002804dc0) (0xc003683f40) Stream added, broadcasting: 5
I0204 12:13:38.245848 7 log.go:181] (0xc002804dc0) Reply frame received for 5
I0204 12:13:38.306563 7 log.go:181] (0xc002804dc0) Data frame received for 5
I0204 12:13:38.306596 7 log.go:181] (0xc003683f40) (5) Data frame handling
I0204 12:13:38.306629 7 log.go:181] (0xc002804dc0) Data frame received for 3
I0204 12:13:38.306641 7 log.go:181] (0xc000f04dc0) (3) Data frame handling
I0204 12:13:38.306652 7 log.go:181] (0xc000f04dc0) (3) Data frame sent
I0204 12:13:38.306660 7 log.go:181] (0xc002804dc0) Data frame received for 3
I0204 12:13:38.306669 7 log.go:181] (0xc000f04dc0) (3) Data frame handling
I0204 12:13:38.307518 7 log.go:181] (0xc002804dc0) Data frame received for 1
I0204 12:13:38.307538 7 log.go:181] (0xc000331360) (1) Data frame handling
I0204 12:13:38.307551 7 log.go:181] (0xc000331360) (1) Data frame sent
I0204 12:13:38.307585 7 log.go:181] (0xc002804dc0) (0xc000331360) Stream removed, broadcasting: 1
I0204 12:13:38.307648 7 log.go:181] (0xc002804dc0) Go away received
I0204 12:13:38.307675 7 log.go:181] (0xc002804dc0) (0xc000331360) Stream removed, broadcasting: 1
I0204 12:13:38.307690 7 log.go:181] (0xc002804dc0) (0xc000f04dc0) Stream removed, broadcasting: 3
I0204 12:13:38.307744 7 log.go:181] (0xc002804dc0) (0xc003683f40) Stream removed, broadcasting: 5
Feb 4 12:13:38.307: INFO: Exec stderr: ""
Feb 4 12:13:38.307: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3481 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 4 12:13:38.307: INFO: >>> kubeConfig: /root/.kube/config
I0204 12:13:38.369694 7 log.go:181] (0xc002b0abb0) (0xc0013095e0) Create stream
I0204 12:13:38.369719 7 log.go:181] (0xc002b0abb0) (0xc0013095e0) Stream added, broadcasting: 1
I0204 12:13:38.371738 7 log.go:181] (0xc002b0abb0) Reply frame received for 1
I0204 12:13:38.371770 7 log.go:181] (0xc002b0abb0) (0xc000f04f00) Create stream
I0204 12:13:38.371778 7 log.go:181] (0xc002b0abb0) (0xc000f04f00) Stream added, broadcasting: 3
I0204 12:13:38.372487 7 log.go:181] (0xc002b0abb0) Reply frame received for 3
I0204 12:13:38.372513 7 log.go:181] (0xc002b0abb0) (0xc001309720) Create stream
I0204 12:13:38.372522 7 log.go:181] (0xc002b0abb0) (0xc001309720) Stream added, broadcasting: 5
I0204 12:13:38.373344 7 log.go:181] (0xc002b0abb0) Reply frame received for 5
I0204 12:13:38.416319 7 log.go:181] (0xc002b0abb0) Data frame received for 5
I0204 12:13:38.416348 7 log.go:181] (0xc001309720) (5) Data frame handling
I0204 12:13:38.416384 7 log.go:181] (0xc002b0abb0) Data frame received for 3
I0204 12:13:38.416394 7 log.go:181] (0xc000f04f00) (3) Data frame handling
I0204 12:13:38.416405 7 log.go:181] (0xc000f04f00) (3) Data frame sent
I0204 12:13:38.416415 7 log.go:181] (0xc002b0abb0) Data frame received for 3
I0204 12:13:38.416423 7 log.go:181] (0xc000f04f00) (3) Data frame handling
I0204 12:13:38.417268 7 log.go:181] (0xc002b0abb0) Data frame received for 1
I0204 12:13:38.417278 7 log.go:181] (0xc0013095e0) (1) Data frame handling
I0204 12:13:38.417286 7 log.go:181] (0xc0013095e0) (1) Data frame sent
I0204 12:13:38.417295 7 log.go:181] (0xc002b0abb0) (0xc0013095e0) Stream removed, broadcasting: 1
I0204 12:13:38.417332 7 log.go:181] (0xc002b0abb0) (0xc0013095e0) Stream removed, broadcasting: 1
I0204 12:13:38.417344 7 log.go:181] (0xc002b0abb0) (0xc000f04f00) Stream removed, broadcasting: 3
I0204 12:13:38.417368 7 log.go:181] (0xc002b0abb0) Go away received
I0204 12:13:38.417401 7 log.go:181] (0xc002b0abb0) (0xc001309720) Stream removed, broadcasting: 5
Feb 4 12:13:38.417: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 4 12:13:38.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-3481" for this suite.
• [SLOW TEST:22.777 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":8,"skipped":155,"failed":0}
SSSSSSSSSSSSS
log.go:181] (0xc002b0abb0) Reply frame received for 5 I0204 12:13:38.416319 7 log.go:181] (0xc002b0abb0) Data frame received for 5 I0204 12:13:38.416348 7 log.go:181] (0xc001309720) (5) Data frame handling I0204 12:13:38.416384 7 log.go:181] (0xc002b0abb0) Data frame received for 3 I0204 12:13:38.416394 7 log.go:181] (0xc000f04f00) (3) Data frame handling I0204 12:13:38.416405 7 log.go:181] (0xc000f04f00) (3) Data frame sent I0204 12:13:38.416415 7 log.go:181] (0xc002b0abb0) Data frame received for 3 I0204 12:13:38.416423 7 log.go:181] (0xc000f04f00) (3) Data frame handling I0204 12:13:38.417268 7 log.go:181] (0xc002b0abb0) Data frame received for 1 I0204 12:13:38.417278 7 log.go:181] (0xc0013095e0) (1) Data frame handling I0204 12:13:38.417286 7 log.go:181] (0xc0013095e0) (1) Data frame sent I0204 12:13:38.417295 7 log.go:181] (0xc002b0abb0) (0xc0013095e0) Stream removed, broadcasting: 1 I0204 12:13:38.417332 7 log.go:181] (0xc002b0abb0) (0xc0013095e0) Stream removed, broadcasting: 1 I0204 12:13:38.417344 7 log.go:181] (0xc002b0abb0) (0xc000f04f00) Stream removed, broadcasting: 3 I0204 12:13:38.417368 7 log.go:181] (0xc002b0abb0) Go away received I0204 12:13:38.417401 7 log.go:181] (0xc002b0abb0) (0xc001309720) Stream removed, broadcasting: 5 Feb 4 12:13:38.417: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:13:38.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-3481" for this suite. • [SLOW TEST:22.777 seconds] [k8s.io] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":8,"skipped":155,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-instrumentation] Events API should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:13:38.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events Feb 4 12:13:38.887: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 4 12:13:39.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2538" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":311,"completed":9,"skipped":168,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 4 12:13:39.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:740
[It] should serve a basic endpoint from pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
STEP: creating service endpoint-test2 in namespace services-2982
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2982 to expose endpoints map[]
Feb 4 12:13:40.031: INFO: successfully validated that service endpoint-test2 in namespace services-2982 exposes endpoints map[]
STEP: Creating pod pod1 in namespace services-2982
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2982 to expose endpoints map[pod1:[80]]
Feb 4 12:13:44.223: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]], will retry
Feb 4 12:13:46.613: INFO: successfully validated that service endpoint-test2 in namespace services-2982 exposes endpoints map[pod1:[80]]
STEP: Creating pod pod2 in namespace services-2982
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2982 to expose endpoints map[pod1:[80] pod2:[80]]
Feb 4 12:13:50.367: INFO: successfully validated that service endpoint-test2 in namespace services-2982 exposes endpoints map[pod1:[80] pod2:[80]]
STEP: Deleting pod pod1 in namespace services-2982
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2982 to expose endpoints map[pod2:[80]]
Feb 4 12:13:51.014: INFO: successfully validated that service endpoint-test2 in namespace services-2982 exposes endpoints map[pod2:[80]]
STEP: Deleting pod pod2 in namespace services-2982
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2982 to expose endpoints map[]
Feb 4 12:13:51.670: INFO: successfully validated that service endpoint-test2 in namespace services-2982 exposes endpoints map[]
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 4 12:13:52.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2982" for this suite.
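------------------------------
The pod1/pod2 endpoint bookkeeping above can be reproduced outside the Ginkgo harness with plain client-go. A minimal sketch, not the suite's own code: the kubeconfig path, poll cadence, and expected map are assumptions; the service and namespace names are the ones in the log.

package main

import (
	"context"
	"fmt"
	"reflect"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ns, svc := "services-2982", "endpoint-test2"

	// Poll the Endpoints object the endpoints controller maintains for the
	// Service until it exposes the expected pod-name -> port map; this is the
	// same condition the "waiting up to 3m0s ... to expose endpoints" steps check.
	want := map[string][]int32{"pod1": {80}}
	err = wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		ep, err := cs.CoreV1().Endpoints(ns).Get(context.TODO(), svc, metav1.GetOptions{})
		if err != nil {
			return false, nil // Endpoints object may not exist yet; keep retrying
		}
		got := map[string][]int32{}
		for _, ss := range ep.Subsets {
			for _, addr := range ss.Addresses {
				for _, p := range ss.Ports {
					if addr.TargetRef != nil {
						got[addr.TargetRef.Name] = append(got[addr.TargetRef.Name], p.Port)
					}
				}
			}
		}
		return reflect.DeepEqual(got, want), nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("service %s exposes endpoints %v\n", svc, want)
}
------------------------------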
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:744
• [SLOW TEST:13.430 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve a basic endpoint from pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":311,"completed":10,"skipped":190,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 4 12:13:52.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 4 12:13:53.018: INFO: Waiting up to 5m0s for pod "pod-ad25dcb2-d682-4b0c-ac3c-152935f9650e" in namespace "emptydir-2118" to be "Succeeded or Failed"
Feb 4 12:13:53.061: INFO: Pod "pod-ad25dcb2-d682-4b0c-ac3c-152935f9650e": Phase="Pending", Reason="", readiness=false. Elapsed: 43.803523ms
Feb 4 12:13:55.175: INFO: Pod "pod-ad25dcb2-d682-4b0c-ac3c-152935f9650e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157422702s
Feb 4 12:13:57.289: INFO: Pod "pod-ad25dcb2-d682-4b0c-ac3c-152935f9650e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.271671737s
Feb 4 12:13:59.600: INFO: Pod "pod-ad25dcb2-d682-4b0c-ac3c-152935f9650e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.582548033s
STEP: Saw pod success
Feb 4 12:13:59.600: INFO: Pod "pod-ad25dcb2-d682-4b0c-ac3c-152935f9650e" satisfied condition "Succeeded or Failed"
Feb 4 12:13:59.621: INFO: Trying to get logs from node latest-worker2 pod pod-ad25dcb2-d682-4b0c-ac3c-152935f9650e container test-container:
STEP: delete the pod
Feb 4 12:13:59.894: INFO: Waiting for pod pod-ad25dcb2-d682-4b0c-ac3c-152935f9650e to disappear
Feb 4 12:13:59.917: INFO: Pod pod-ad25dcb2-d682-4b0c-ac3c-152935f9650e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 4 12:13:59.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2118" for this suite.
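------------------------------
The (non-root,0666,tmpfs) case above comes down to one pod spec: a memory-backed emptyDir mounted into a container that runs as a non-root UID and writes a world-writable file. A sketch of that spec in Go, under stated assumptions: busybox stands in for the agnhost mounttest image the suite actually uses, and UID 1001 is an arbitrary non-root choice.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int64p(i int64) *int64 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64p(1001), // non-root, as in the (non-root,...) variant
			},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{
						Medium: corev1.StorageMediumMemory, // tmpfs-backed emptyDir
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // stand-in image; the suite uses agnhost mounttest
				Command: []string{"sh", "-c", "touch /mnt/test/f && chmod 0666 /mnt/test/f && stat -c %a /mnt/test/f"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/mnt/test",
				}},
			}},
		},
	}
	created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod", created.Name, "- wait for phase Succeeded, then check its log for 666")
}
------------------------------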
• [SLOW TEST:7.277 seconds]
[sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45
should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":11,"skipped":194,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
[BeforeEach] [k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 4 12:13:59.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [sig-storage] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
STEP: Creating a pod to test substitution in volume subpath
Feb 4 12:14:00.265: INFO: Waiting up to 5m0s for pod "var-expansion-c9e7c965-c5eb-46a8-9cdd-dfffc8941d1e" in namespace "var-expansion-5838" to be "Succeeded or Failed"
Feb 4 12:14:00.319: INFO: Pod "var-expansion-c9e7c965-c5eb-46a8-9cdd-dfffc8941d1e": Phase="Pending", Reason="", readiness=false. Elapsed: 53.880812ms
Feb 4 12:14:02.463: INFO: Pod "var-expansion-c9e7c965-c5eb-46a8-9cdd-dfffc8941d1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19761081s
Feb 4 12:14:04.667: INFO: Pod "var-expansion-c9e7c965-c5eb-46a8-9cdd-dfffc8941d1e": Phase="Running", Reason="", readiness=true. Elapsed: 4.401296078s
Feb 4 12:14:06.828: INFO: Pod "var-expansion-c9e7c965-c5eb-46a8-9cdd-dfffc8941d1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.562791293s
STEP: Saw pod success
Feb 4 12:14:06.828: INFO: Pod "var-expansion-c9e7c965-c5eb-46a8-9cdd-dfffc8941d1e" satisfied condition "Succeeded or Failed"
Feb 4 12:14:06.909: INFO: Trying to get logs from node latest-worker2 pod var-expansion-c9e7c965-c5eb-46a8-9cdd-dfffc8941d1e container dapi-container:
STEP: delete the pod
Feb 4 12:14:07.231: INFO: Waiting for pod var-expansion-c9e7c965-c5eb-46a8-9cdd-dfffc8941d1e to disappear
Feb 4 12:14:07.290: INFO: Pod var-expansion-c9e7c965-c5eb-46a8-9cdd-dfffc8941d1e no longer exists
[AfterEach] [k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 4 12:14:07.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5838" for this suite.
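------------------------------
The volume-subpath substitution verified above is driven by a single field, subPathExpr, which expands container environment variables into the mount path. A minimal sketch of the pattern, printed as YAML so it can be inspected without a cluster; the pod and volume names here are made up, and the suite's own spec differs in detail.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-subpath-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "workdir",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // stand-in image
				Command: []string{"sh", "-c", "test -d /subpath_mount && echo ok"},
				Env: []corev1.EnvVar{{
					// POD_NAME is injected from pod metadata and referenced below.
					Name: "POD_NAME",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
					},
				}},
				VolumeMounts: []corev1.VolumeMount{{
					Name:        "workdir",
					MountPath:   "/subpath_mount",
					SubPathExpr: "$(POD_NAME)", // the substitution under test
				}},
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
------------------------------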
• [SLOW TEST:7.357 seconds]
[k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635
should allow substituting values in a volume subpath [sig-storage] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":311,"completed":12,"skipped":212,"failed":0}
[sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 4 12:14:07.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should check if Kubernetes control plane services is included in cluster-info [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
STEP: validating cluster-info
Feb 4 12:14:07.575: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-4523 cluster-info'
Feb 4 12:14:12.916: INFO: stderr: ""
Feb 4 12:14:12.916: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:36371\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 4 12:14:12.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4523" for this suite.
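------------------------------
The same validation can be scripted directly: run kubectl cluster-info and look for the control-plane line in stdout. A sketch assuming kubectl is on PATH; the --server and --kubeconfig values are the ones printed in the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation shape the test logs above; kubectl colorizes its output
	// with ANSI escapes, so the check only needs a plain substring match.
	out, err := exec.Command("kubectl",
		"--server=https://172.30.12.66:36371",
		"--kubeconfig=/root/.kube/config",
		"cluster-info").CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	if strings.Contains(string(out), "Kubernetes control plane") {
		fmt.Println("control plane service is listed in cluster-info")
	}
}
------------------------------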
• [SLOW TEST:5.618 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl cluster-info /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1064 should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":311,"completed":13,"skipped":212,"failed":0} SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:14:12.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating pod pod-subpath-test-configmap-9xhk STEP: Creating a pod to test atomic-volume-subpath Feb 4 12:14:13.383: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-9xhk" in namespace "subpath-3393" to be "Succeeded or Failed" Feb 4 12:14:13.430: INFO: Pod "pod-subpath-test-configmap-9xhk": Phase="Pending", Reason="", readiness=false. Elapsed: 46.793521ms Feb 4 12:14:15.942: INFO: Pod "pod-subpath-test-configmap-9xhk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.559184346s Feb 4 12:14:18.218: INFO: Pod "pod-subpath-test-configmap-9xhk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.834561365s Feb 4 12:14:20.235: INFO: Pod "pod-subpath-test-configmap-9xhk": Phase="Running", Reason="", readiness=true. Elapsed: 6.851836048s Feb 4 12:14:22.259: INFO: Pod "pod-subpath-test-configmap-9xhk": Phase="Running", Reason="", readiness=true. Elapsed: 8.875967482s Feb 4 12:14:24.316: INFO: Pod "pod-subpath-test-configmap-9xhk": Phase="Running", Reason="", readiness=true. Elapsed: 10.932910003s Feb 4 12:14:26.381: INFO: Pod "pod-subpath-test-configmap-9xhk": Phase="Running", Reason="", readiness=true. Elapsed: 12.997577861s Feb 4 12:14:28.412: INFO: Pod "pod-subpath-test-configmap-9xhk": Phase="Running", Reason="", readiness=true. Elapsed: 15.028425196s Feb 4 12:14:30.415: INFO: Pod "pod-subpath-test-configmap-9xhk": Phase="Running", Reason="", readiness=true. Elapsed: 17.031505206s Feb 4 12:14:32.420: INFO: Pod "pod-subpath-test-configmap-9xhk": Phase="Running", Reason="", readiness=true. Elapsed: 19.036918572s Feb 4 12:14:34.450: INFO: Pod "pod-subpath-test-configmap-9xhk": Phase="Running", Reason="", readiness=true. Elapsed: 21.067031128s Feb 4 12:14:36.475: INFO: Pod "pod-subpath-test-configmap-9xhk": Phase="Running", Reason="", readiness=true. 
Elapsed: 23.092158113s
Feb 4 12:14:38.481: INFO: Pod "pod-subpath-test-configmap-9xhk": Phase="Running", Reason="", readiness=true. Elapsed: 25.098019335s
Feb 4 12:14:41.079: INFO: Pod "pod-subpath-test-configmap-9xhk": Phase="Running", Reason="", readiness=true. Elapsed: 27.696159118s
Feb 4 12:14:44.120: INFO: Pod "pod-subpath-test-configmap-9xhk": Phase="Running", Reason="", readiness=true. Elapsed: 30.736586442s
Feb 4 12:14:46.479: INFO: Pod "pod-subpath-test-configmap-9xhk": Phase="Running", Reason="", readiness=true. Elapsed: 33.095944063s
Feb 4 12:14:48.541: INFO: Pod "pod-subpath-test-configmap-9xhk": Phase="Running", Reason="", readiness=true. Elapsed: 35.157763472s
Feb 4 12:14:50.605: INFO: Pod "pod-subpath-test-configmap-9xhk": Phase="Running", Reason="", readiness=true. Elapsed: 37.222068057s
Feb 4 12:14:53.201: INFO: Pod "pod-subpath-test-configmap-9xhk": Phase="Running", Reason="", readiness=true. Elapsed: 39.817462264s
Feb 4 12:14:56.134: INFO: Pod "pod-subpath-test-configmap-9xhk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 42.751008774s
STEP: Saw pod success
Feb 4 12:14:56.134: INFO: Pod "pod-subpath-test-configmap-9xhk" satisfied condition "Succeeded or Failed"
Feb 4 12:14:56.226: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-9xhk container test-container-subpath-configmap-9xhk:
STEP: delete the pod
Feb 4 12:14:56.529: INFO: Waiting for pod pod-subpath-test-configmap-9xhk to disappear
Feb 4 12:14:56.594: INFO: Pod pod-subpath-test-configmap-9xhk no longer exists
STEP: Deleting pod pod-subpath-test-configmap-9xhk
Feb 4 12:14:56.594: INFO: Deleting pod "pod-subpath-test-configmap-9xhk" in namespace "subpath-3393"
[AfterEach] [sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 4 12:14:56.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3393" for this suite.
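------------------------------
The atomic-writer subpath case mounts a single ConfigMap key as a file rather than the whole volume. A sketch of the shape of such a pod, printed as YAML; the ConfigMap name my-configmap and key data-1 are placeholders, busybox stands in for the suite's test image, and the long Running phase above is mirrored by a read loop.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "config",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container-subpath",
				Image: "busybox",
				// Re-read the projected file for a while, roughly matching the
				// test's long Running phase seen in the log above.
				Command: []string{"sh", "-c", "for i in $(seq 1 30); do cat /probe-volume/data-1; sleep 1; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "config",
					MountPath: "/probe-volume/data-1",
					SubPath:   "data-1", // mount one key of the ConfigMap, not the whole volume
				}},
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
------------------------------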
• [SLOW TEST:43.798 seconds]
[sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with configmap pod [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":311,"completed":14,"skipped":217,"failed":0}
SS
------------------------------
[sig-node] PodTemplates should delete a collection of pod templates [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
[BeforeEach] [sig-node] PodTemplates
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 4 12:14:56.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a collection of pod templates [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
STEP: Create set of pod templates
Feb 4 12:14:56.983: INFO: created test-podtemplate-1
Feb 4 12:14:57.068: INFO: created test-podtemplate-2
Feb 4 12:14:57.088: INFO: created test-podtemplate-3
STEP: get a list of pod templates with a label in the current namespace
STEP: delete collection of pod templates
Feb 4 12:14:57.118: INFO: requesting DeleteCollection of pod templates
STEP: check that the list of pod templates matches the requested quantity
Feb 4 12:14:57.270: INFO: requesting list of pod templates to confirm quantity
[AfterEach] [sig-node] PodTemplates
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 4 12:14:57.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-9864" for this suite.
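------------------------------
The create-three/delete-by-selector flow above maps directly onto the PodTemplates client. A minimal sketch, not the suite's code; the label podtemplate-set=true and the default namespace are assumptions.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ns, sel := "default", "podtemplate-set=true"

	// Create a few labeled templates, mirroring test-podtemplate-1..3 above.
	for i := 1; i <= 3; i++ {
		_, err := cs.CoreV1().PodTemplates(ns).Create(context.TODO(), &corev1.PodTemplate{
			ObjectMeta: metav1.ObjectMeta{
				Name:   fmt.Sprintf("test-podtemplate-%d", i),
				Labels: map[string]string{"podtemplate-set": "true"},
			},
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{Containers: []corev1.Container{{Name: "c", Image: "busybox"}}},
			},
		}, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
	}

	// Delete them all in one call, then confirm the selector matches nothing.
	if err := cs.CoreV1().PodTemplates(ns).DeleteCollection(context.TODO(),
		metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: sel}); err != nil {
		panic(err)
	}
	left, err := cs.CoreV1().PodTemplates(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod templates remaining after DeleteCollection:", len(left.Items))
}
------------------------------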
•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":311,"completed":15,"skipped":219,"failed":0} ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:14:57.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 12:14:57.670: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Feb 4 12:15:01.326: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-74 --namespace=crd-publish-openapi-74 create -f -' Feb 4 12:15:11.833: INFO: stderr: "" Feb 4 12:15:11.833: INFO: stdout: "e2e-test-crd-publish-openapi-4528-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Feb 4 12:15:11.833: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-74 --namespace=crd-publish-openapi-74 delete e2e-test-crd-publish-openapi-4528-crds test-cr' Feb 4 12:15:12.080: INFO: stderr: "" Feb 4 12:15:12.080: INFO: stdout: "e2e-test-crd-publish-openapi-4528-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Feb 4 12:15:12.080: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-74 --namespace=crd-publish-openapi-74 apply -f -' Feb 4 12:15:12.430: INFO: stderr: "" Feb 4 12:15:12.430: INFO: stdout: "e2e-test-crd-publish-openapi-4528-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Feb 4 12:15:12.430: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-74 --namespace=crd-publish-openapi-74 delete e2e-test-crd-publish-openapi-4528-crds test-cr' Feb 4 12:15:12.619: INFO: stderr: "" Feb 4 12:15:12.619: INFO: stdout: "e2e-test-crd-publish-openapi-4528-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Feb 4 12:15:12.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-74 explain e2e-test-crd-publish-openapi-4528-crds' Feb 4 12:15:12.914: INFO: stderr: "" Feb 4 12:15:12.914: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4528-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:15:16.601: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "crd-publish-openapi-74" for this suite. • [SLOW TEST:19.247 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":311,"completed":16,"skipped":219,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:15:16.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 4 12:15:17.811: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 4 12:15:20.565: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037717, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037717, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037717, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037717, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 12:15:22.697: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037717, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037717, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037717, 
loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748037717, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 4 12:15:25.706: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 12:15:25.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2294-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:15:27.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4963" for this suite. STEP: Destroying namespace "webhook-4963-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:10.929 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":311,"completed":17,"skipped":227,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:15:27.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Feb 4 12:15:27.808: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bba2793a-a022-460d-83a0-d0a70be5f1a3" in namespace "downward-api-1330" to be "Succeeded or Failed" Feb 4 12:15:27.862: INFO: Pod "downwardapi-volume-bba2793a-a022-460d-83a0-d0a70be5f1a3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 54.531412ms Feb 4 12:15:29.970: INFO: Pod "downwardapi-volume-bba2793a-a022-460d-83a0-d0a70be5f1a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.162165356s Feb 4 12:15:32.009: INFO: Pod "downwardapi-volume-bba2793a-a022-460d-83a0-d0a70be5f1a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.201087561s Feb 4 12:15:34.028: INFO: Pod "downwardapi-volume-bba2793a-a022-460d-83a0-d0a70be5f1a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.219875529s STEP: Saw pod success Feb 4 12:15:34.028: INFO: Pod "downwardapi-volume-bba2793a-a022-460d-83a0-d0a70be5f1a3" satisfied condition "Succeeded or Failed" Feb 4 12:15:34.057: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-bba2793a-a022-460d-83a0-d0a70be5f1a3 container client-container: STEP: delete the pod Feb 4 12:15:34.328: INFO: Waiting for pod downwardapi-volume-bba2793a-a022-460d-83a0-d0a70be5f1a3 to disappear Feb 4 12:15:34.363: INFO: Pod downwardapi-volume-bba2793a-a022-460d-83a0-d0a70be5f1a3 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:15:34.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1330" for this suite. • [SLOW TEST:6.808 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":311,"completed":18,"skipped":236,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:15:34.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating pod test-webserver-0da756dd-a498-4ef9-91a8-5c5188677278 in namespace container-probe-6294 Feb 4 12:15:40.786: INFO: Started pod test-webserver-0da756dd-a498-4ef9-91a8-5c5188677278 in namespace container-probe-6294 STEP: checking the pod's current state and verifying that restartCount is present Feb 4 12:15:40.841: INFO: Initial restart count of pod test-webserver-0da756dd-a498-4ef9-91a8-5c5188677278 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:19:41.753: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6294" for this suite. • [SLOW TEST:249.959 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":311,"completed":19,"skipped":244,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:19:44.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test substitution in container's command Feb 4 12:19:48.852: INFO: Waiting up to 5m0s for pod "var-expansion-1a4b6c7b-8c61-427d-a6d4-3f90df746149" in namespace "var-expansion-2697" to be "Succeeded or Failed" Feb 4 12:19:49.005: INFO: Pod "var-expansion-1a4b6c7b-8c61-427d-a6d4-3f90df746149": Phase="Pending", Reason="", readiness=false. Elapsed: 152.345696ms Feb 4 12:19:51.574: INFO: Pod "var-expansion-1a4b6c7b-8c61-427d-a6d4-3f90df746149": Phase="Pending", Reason="", readiness=false. Elapsed: 2.721141014s Feb 4 12:19:53.749: INFO: Pod "var-expansion-1a4b6c7b-8c61-427d-a6d4-3f90df746149": Phase="Pending", Reason="", readiness=false. Elapsed: 4.896243939s Feb 4 12:19:55.878: INFO: Pod "var-expansion-1a4b6c7b-8c61-427d-a6d4-3f90df746149": Phase="Pending", Reason="", readiness=false. Elapsed: 7.025585016s Feb 4 12:19:58.041: INFO: Pod "var-expansion-1a4b6c7b-8c61-427d-a6d4-3f90df746149": Phase="Pending", Reason="", readiness=false. Elapsed: 9.188634964s Feb 4 12:20:00.424: INFO: Pod "var-expansion-1a4b6c7b-8c61-427d-a6d4-3f90df746149": Phase="Pending", Reason="", readiness=false. Elapsed: 11.571897943s Feb 4 12:20:02.471: INFO: Pod "var-expansion-1a4b6c7b-8c61-427d-a6d4-3f90df746149": Phase="Pending", Reason="", readiness=false. Elapsed: 13.618475743s Feb 4 12:20:04.538: INFO: Pod "var-expansion-1a4b6c7b-8c61-427d-a6d4-3f90df746149": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 15.685063974s STEP: Saw pod success Feb 4 12:20:04.538: INFO: Pod "var-expansion-1a4b6c7b-8c61-427d-a6d4-3f90df746149" satisfied condition "Succeeded or Failed" Feb 4 12:20:04.573: INFO: Trying to get logs from node latest-worker2 pod var-expansion-1a4b6c7b-8c61-427d-a6d4-3f90df746149 container dapi-container: STEP: delete the pod Feb 4 12:20:04.760: INFO: Waiting for pod var-expansion-1a4b6c7b-8c61-427d-a6d4-3f90df746149 to disappear Feb 4 12:20:04.812: INFO: Pod var-expansion-1a4b6c7b-8c61-427d-a6d4-3f90df746149 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:20:04.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2697" for this suite. • [SLOW TEST:20.502 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":311,"completed":20,"skipped":258,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:20:04.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating Agnhost RC Feb 4 12:20:05.109: INFO: namespace kubectl-9887 Feb 4 12:20:05.109: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-9887 create -f -' Feb 4 12:20:05.633: INFO: stderr: "" Feb 4 12:20:05.633: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Feb 4 12:20:06.770: INFO: Selector matched 1 pods for map[app:agnhost] Feb 4 12:20:06.771: INFO: Found 0 / 1 Feb 4 12:20:08.794: INFO: Selector matched 1 pods for map[app:agnhost] Feb 4 12:20:08.794: INFO: Found 0 / 1 Feb 4 12:20:10.062: INFO: Selector matched 1 pods for map[app:agnhost] Feb 4 12:20:10.062: INFO: Found 0 / 1 Feb 4 12:20:10.669: INFO: Selector matched 1 pods for map[app:agnhost] Feb 4 12:20:10.669: INFO: Found 0 / 1 Feb 4 12:20:11.790: INFO: Selector matched 1 pods for map[app:agnhost] Feb 4 12:20:11.790: INFO: Found 1 / 1 Feb 4 12:20:11.790: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 Feb 4 12:20:11.879: INFO: Selector matched 1 pods for map[app:agnhost] Feb 4 12:20:11.879: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Feb 4 12:20:11.879: INFO: wait on agnhost-primary startup in kubectl-9887 Feb 4 12:20:11.879: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-9887 logs agnhost-primary-lnxvj agnhost-primary' Feb 4 12:20:12.119: INFO: stderr: "" Feb 4 12:20:12.119: INFO: stdout: "Paused\n" STEP: exposing RC Feb 4 12:20:12.119: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-9887 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' Feb 4 12:20:13.127: INFO: stderr: "" Feb 4 12:20:13.127: INFO: stdout: "service/rm2 exposed\n" Feb 4 12:20:13.715: INFO: Service rm2 in namespace kubectl-9887 found. STEP: exposing service Feb 4 12:20:15.811: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-9887 expose service rm2 --name=rm3 --port=2345 --target-port=6379' Feb 4 12:20:16.155: INFO: stderr: "" Feb 4 12:20:16.155: INFO: stdout: "service/rm3 exposed\n" Feb 4 12:20:16.178: INFO: Service rm3 in namespace kubectl-9887 found. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:20:18.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9887" for this suite. • [SLOW TEST:13.502 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1229 should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":311,"completed":21,"skipped":304,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:20:18.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Feb 4 12:20:18.829: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 12:20:18.927: INFO: Number of nodes with available pods: 0 Feb 4 12:20:18.927: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:20:20.006: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 12:20:20.214: INFO: Number of nodes with available pods: 0 Feb 4 12:20:20.214: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:20:20.931: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 12:20:21.660: INFO: Number of nodes with available pods: 0 Feb 4 12:20:21.660: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:20:23.289: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 12:20:23.839: INFO: Number of nodes with available pods: 0 Feb 4 12:20:23.839: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:20:24.162: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 12:20:24.599: INFO: Number of nodes with available pods: 0 Feb 4 12:20:24.599: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:20:24.975: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 12:20:24.982: INFO: Number of nodes with available pods: 0 Feb 4 12:20:24.982: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:20:26.665: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 12:20:26.715: INFO: Number of nodes with available pods: 2 Feb 4 12:20:26.715: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Feb 4 12:20:27.610: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 4 12:20:27.753: INFO: Number of nodes with available pods: 1
Feb 4 12:20:27.753: INFO: Node latest-worker2 is running more than one daemon pod
Feb 4 12:20:29.317: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 4 12:20:29.393: INFO: Number of nodes with available pods: 1
Feb 4 12:20:29.393: INFO: Node latest-worker2 is running more than one daemon pod
Feb 4 12:20:29.819: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 4 12:20:29.822: INFO: Number of nodes with available pods: 1
Feb 4 12:20:29.822: INFO: Node latest-worker2 is running more than one daemon pod
Feb 4 12:20:31.419: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 4 12:20:31.443: INFO: Number of nodes with available pods: 1
Feb 4 12:20:31.443: INFO: Node latest-worker2 is running more than one daemon pod
Feb 4 12:20:31.770: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 4 12:20:31.783: INFO: Number of nodes with available pods: 1
Feb 4 12:20:31.783: INFO: Node latest-worker2 is running more than one daemon pod
Feb 4 12:20:32.796: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 4 12:20:32.880: INFO: Number of nodes with available pods: 2
Feb 4 12:20:32.880: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-699, will wait for the garbage collector to delete the pods
Feb 4 12:20:33.193: INFO: Deleting DaemonSet.extensions daemon-set took: 25.231948ms
Feb 4 12:20:33.893: INFO: Terminating DaemonSet.extensions daemon-set pods took: 700.173015ms
Feb 4 12:21:41.308: INFO: Number of nodes with available pods: 0
Feb 4 12:21:41.308: INFO: Number of running nodes: 0, number of available pods: 0
Feb 4 12:21:41.511: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"2036998"},"items":null}
Feb 4 12:21:41.528: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2036998"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 4 12:21:41.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-699" for this suite.
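------------------------------
The revival check above hinges on the DaemonSet controller reconciling status back to full availability after a daemon pod fails. A rough sketch of the same observation with client-go, with one simplification: it deletes a daemon pod instead of forcing its phase to Failed the way the test does. The daemonset-name label selector and the namespace are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ns, name := "default", "daemon-set" // names mirror the log; adjust as needed

	// Kill one daemon pod (a stand-in for the test's "set phase to Failed")...
	pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
		metav1.ListOptions{LabelSelector: "daemonset-name=" + name})
	if err != nil || len(pods.Items) == 0 {
		panic(fmt.Sprint("no daemon pods found: ", err))
	}
	victim := pods.Items[0].Name
	if err := cs.CoreV1().Pods(ns).Delete(context.TODO(), victim, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}

	// ...then watch the controller bring the DaemonSet back to full availability,
	// the same signal as the "Number of nodes with available pods" polling above.
	err = wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		ds, err := cs.AppsV1().DaemonSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return ds.Status.NumberAvailable == ds.Status.DesiredNumberScheduled, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("daemon pod", victim, "was replaced; DaemonSet is fully available again")
}
------------------------------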
• [SLOW TEST:83.417 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":311,"completed":22,"skipped":314,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:21:41.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap configmap-4241/configmap-test-ce6242f0-39c9-4e98-8dd9-a06cd1aa75c4 STEP: Creating a pod to test consume configMaps Feb 4 12:21:41.998: INFO: Waiting up to 5m0s for pod "pod-configmaps-3667f85a-ea47-41f4-bd6a-0d54a564ff95" in namespace "configmap-4241" to be "Succeeded or Failed" Feb 4 12:21:42.104: INFO: Pod "pod-configmaps-3667f85a-ea47-41f4-bd6a-0d54a564ff95": Phase="Pending", Reason="", readiness=false. Elapsed: 105.64271ms Feb 4 12:21:44.189: INFO: Pod "pod-configmaps-3667f85a-ea47-41f4-bd6a-0d54a564ff95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190801781s Feb 4 12:21:46.767: INFO: Pod "pod-configmaps-3667f85a-ea47-41f4-bd6a-0d54a564ff95": Phase="Pending", Reason="", readiness=false. Elapsed: 4.76905746s Feb 4 12:21:48.877: INFO: Pod "pod-configmaps-3667f85a-ea47-41f4-bd6a-0d54a564ff95": Phase="Pending", Reason="", readiness=false. Elapsed: 6.879393422s Feb 4 12:21:50.909: INFO: Pod "pod-configmaps-3667f85a-ea47-41f4-bd6a-0d54a564ff95": Phase="Pending", Reason="", readiness=false. Elapsed: 8.911045398s Feb 4 12:21:54.130: INFO: Pod "pod-configmaps-3667f85a-ea47-41f4-bd6a-0d54a564ff95": Phase="Pending", Reason="", readiness=false. Elapsed: 12.131899298s Feb 4 12:21:56.321: INFO: Pod "pod-configmaps-3667f85a-ea47-41f4-bd6a-0d54a564ff95": Phase="Pending", Reason="", readiness=false. Elapsed: 14.322579507s Feb 4 12:21:58.413: INFO: Pod "pod-configmaps-3667f85a-ea47-41f4-bd6a-0d54a564ff95": Phase="Running", Reason="", readiness=true. Elapsed: 16.414506319s Feb 4 12:22:00.664: INFO: Pod "pod-configmaps-3667f85a-ea47-41f4-bd6a-0d54a564ff95": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 18.665626135s STEP: Saw pod success Feb 4 12:22:00.664: INFO: Pod "pod-configmaps-3667f85a-ea47-41f4-bd6a-0d54a564ff95" satisfied condition "Succeeded or Failed" Feb 4 12:22:01.021: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-3667f85a-ea47-41f4-bd6a-0d54a564ff95 container env-test: STEP: delete the pod Feb 4 12:22:01.299: INFO: Waiting for pod pod-configmaps-3667f85a-ea47-41f4-bd6a-0d54a564ff95 to disappear Feb 4 12:22:01.363: INFO: Pod pod-configmaps-3667f85a-ea47-41f4-bd6a-0d54a564ff95 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:22:01.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4241" for this suite. • [SLOW TEST:19.629 seconds] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":311,"completed":23,"skipped":362,"failed":0} SSSSS ------------------------------ [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:22:01.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:22:02.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9377" for this suite. 
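------------------------------
The two ConfigMap specs above exercise the same API surface from two angles: the first injects a ConfigMap value into a container's environment and asserts on the pod's log output, while the second drives the bare lifecycle verbs (create, fetch, patch, list across namespaces by label selector, delete by collection). Below is a minimal sketch of the environment-injection pattern, assuming placeholder names, keys, and a busybox image — none of these are the test's actual objects.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// ConfigMap holding one key the container will read.
	cm := corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	// Pod that maps the key into an env var and just dumps the environment;
	// the test framework then fetches the container log and checks for the
	// expected value, as the "Trying to get logs" step above does.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox:1.29", // placeholder image
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	for _, obj := range []interface{}{cm, pod} {
		b, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(b))
	}
}

A pod built this way terminates with phase Succeeded once `env` exits, which is the "Succeeded or Failed" condition the poll loop above waits on.
------------------------------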
•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":311,"completed":24,"skipped":367,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:22:02.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:740 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating service in namespace services-1107 STEP: creating service affinity-nodeport-transition in namespace services-1107 STEP: creating replication controller affinity-nodeport-transition in namespace services-1107 I0204 12:22:02.456279 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-1107, replica count: 3 I0204 12:22:05.506655 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 12:22:08.506852 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 12:22:11.507071 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 4 12:22:11.674: INFO: Creating new exec pod Feb 4 12:22:20.915: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-1107 exec execpod-affinityszsnl -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' Feb 4 12:22:21.153: INFO: stderr: "I0204 12:22:21.069995 202 log.go:181] (0xc00003a160) (0xc000a286e0) Create stream\nI0204 12:22:21.070050 202 log.go:181] (0xc00003a160) (0xc000a286e0) Stream added, broadcasting: 1\nI0204 12:22:21.071628 202 log.go:181] (0xc00003a160) Reply frame received for 1\nI0204 12:22:21.071657 202 log.go:181] (0xc00003a160) (0xc000a29540) Create stream\nI0204 12:22:21.071663 202 log.go:181] (0xc00003a160) (0xc000a29540) Stream added, broadcasting: 3\nI0204 12:22:21.072271 202 log.go:181] (0xc00003a160) Reply frame received for 3\nI0204 12:22:21.072299 202 log.go:181] (0xc00003a160) (0xc000a295e0) Create stream\nI0204 12:22:21.072304 202 log.go:181] (0xc00003a160) (0xc000a295e0) Stream added, broadcasting: 5\nI0204 12:22:21.073226 202 log.go:181] (0xc00003a160) Reply frame received for 5\nI0204 12:22:21.149634 202 log.go:181] (0xc00003a160) Data frame received for 3\nI0204 12:22:21.149666 202 log.go:181] (0xc000a29540) (3) Data frame handling\nI0204 12:22:21.149715 202 log.go:181] (0xc00003a160) Data frame received for 5\nI0204 12:22:21.149730 202 log.go:181] (0xc000a295e0) (5) Data frame handling\nI0204 12:22:21.149741 202 
log.go:181] (0xc000a295e0) (5) Data frame sent\nI0204 12:22:21.149754 202 log.go:181] (0xc00003a160) Data frame received for 5\nI0204 12:22:21.149761 202 log.go:181] (0xc000a295e0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0204 12:22:21.150406 202 log.go:181] (0xc00003a160) Data frame received for 1\nI0204 12:22:21.150420 202 log.go:181] (0xc000a286e0) (1) Data frame handling\nI0204 12:22:21.150433 202 log.go:181] (0xc000a286e0) (1) Data frame sent\nI0204 12:22:21.150442 202 log.go:181] (0xc00003a160) (0xc000a286e0) Stream removed, broadcasting: 1\nI0204 12:22:21.150452 202 log.go:181] (0xc00003a160) Go away received\nI0204 12:22:21.150684 202 log.go:181] (0xc00003a160) (0xc000a286e0) Stream removed, broadcasting: 1\nI0204 12:22:21.150699 202 log.go:181] (0xc00003a160) (0xc000a29540) Stream removed, broadcasting: 3\nI0204 12:22:21.150708 202 log.go:181] (0xc00003a160) (0xc000a295e0) Stream removed, broadcasting: 5\n" Feb 4 12:22:21.153: INFO: stdout: "" Feb 4 12:22:21.154: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-1107 exec execpod-affinityszsnl -- /bin/sh -x -c nc -zv -t -w 2 10.96.152.220 80' Feb 4 12:22:21.411: INFO: stderr: "I0204 12:22:21.349023 217 log.go:181] (0xc000140370) (0xc0003552c0) Create stream\nI0204 12:22:21.349083 217 log.go:181] (0xc000140370) (0xc0003552c0) Stream added, broadcasting: 1\nI0204 12:22:21.350671 217 log.go:181] (0xc000140370) Reply frame received for 1\nI0204 12:22:21.350703 217 log.go:181] (0xc000140370) (0xc000355540) Create stream\nI0204 12:22:21.350712 217 log.go:181] (0xc000140370) (0xc000355540) Stream added, broadcasting: 3\nI0204 12:22:21.351494 217 log.go:181] (0xc000140370) Reply frame received for 3\nI0204 12:22:21.351520 217 log.go:181] (0xc000140370) (0xc00078a140) Create stream\nI0204 12:22:21.351531 217 log.go:181] (0xc000140370) (0xc00078a140) Stream added, broadcasting: 5\nI0204 12:22:21.352952 217 log.go:181] (0xc000140370) Reply frame received for 5\nI0204 12:22:21.406661 217 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:22:21.406688 217 log.go:181] (0xc000355540) (3) Data frame handling\nI0204 12:22:21.406733 217 log.go:181] (0xc000140370) Data frame received for 5\nI0204 12:22:21.406743 217 log.go:181] (0xc00078a140) (5) Data frame handling\nI0204 12:22:21.406751 217 log.go:181] (0xc00078a140) (5) Data frame sent\nI0204 12:22:21.406758 217 log.go:181] (0xc000140370) Data frame received for 5\nI0204 12:22:21.406766 217 log.go:181] (0xc00078a140) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.152.220 80\nConnection to 10.96.152.220 80 port [tcp/http] succeeded!\nI0204 12:22:21.407309 217 log.go:181] (0xc000140370) Data frame received for 1\nI0204 12:22:21.407324 217 log.go:181] (0xc0003552c0) (1) Data frame handling\nI0204 12:22:21.407333 217 log.go:181] (0xc0003552c0) (1) Data frame sent\nI0204 12:22:21.407344 217 log.go:181] (0xc000140370) (0xc0003552c0) Stream removed, broadcasting: 1\nI0204 12:22:21.407401 217 log.go:181] (0xc000140370) Go away received\nI0204 12:22:21.407590 217 log.go:181] (0xc000140370) (0xc0003552c0) Stream removed, broadcasting: 1\nI0204 12:22:21.407606 217 log.go:181] (0xc000140370) (0xc000355540) Stream removed, broadcasting: 3\nI0204 12:22:21.407620 217 log.go:181] (0xc000140370) (0xc00078a140) Stream removed, broadcasting: 5\n" Feb 4 12:22:21.411: INFO: stdout: "" Feb 4 12:22:21.411: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-1107 exec execpod-affinityszsnl -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 32008' Feb 4 12:22:21.703: INFO: stderr: "I0204 12:22:21.640346 233 log.go:181] (0xc00054edc0) (0xc0005c0820) Create stream\nI0204 12:22:21.640413 233 log.go:181] (0xc00054edc0) (0xc0005c0820) Stream added, broadcasting: 1\nI0204 12:22:21.645071 233 log.go:181] (0xc00054edc0) Reply frame received for 1\nI0204 12:22:21.645105 233 log.go:181] (0xc00054edc0) (0xc0004c5e00) Create stream\nI0204 12:22:21.645121 233 log.go:181] (0xc00054edc0) (0xc0004c5e00) Stream added, broadcasting: 3\nI0204 12:22:21.645773 233 log.go:181] (0xc00054edc0) Reply frame received for 3\nI0204 12:22:21.645793 233 log.go:181] (0xc00054edc0) (0xc000798140) Create stream\nI0204 12:22:21.645800 233 log.go:181] (0xc00054edc0) (0xc000798140) Stream added, broadcasting: 5\nI0204 12:22:21.646500 233 log.go:181] (0xc00054edc0) Reply frame received for 5\nI0204 12:22:21.696951 233 log.go:181] (0xc00054edc0) Data frame received for 3\nI0204 12:22:21.696986 233 log.go:181] (0xc0004c5e00) (3) Data frame handling\nI0204 12:22:21.697049 233 log.go:181] (0xc00054edc0) Data frame received for 5\nI0204 12:22:21.697081 233 log.go:181] (0xc000798140) (5) Data frame handling\nI0204 12:22:21.697103 233 log.go:181] (0xc000798140) (5) Data frame sent\nI0204 12:22:21.697120 233 log.go:181] (0xc00054edc0) Data frame received for 5\nI0204 12:22:21.697130 233 log.go:181] (0xc000798140) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 32008\nConnection to 172.18.0.14 32008 port [tcp/*] succeeded!\nI0204 12:22:21.698349 233 log.go:181] (0xc00054edc0) Data frame received for 1\nI0204 12:22:21.698371 233 log.go:181] (0xc0005c0820) (1) Data frame handling\nI0204 12:22:21.698385 233 log.go:181] (0xc0005c0820) (1) Data frame sent\nI0204 12:22:21.698393 233 log.go:181] (0xc00054edc0) (0xc0005c0820) Stream removed, broadcasting: 1\nI0204 12:22:21.698449 233 log.go:181] (0xc00054edc0) Go away received\nI0204 12:22:21.698688 233 log.go:181] (0xc00054edc0) (0xc0005c0820) Stream removed, broadcasting: 1\nI0204 12:22:21.698704 233 log.go:181] (0xc00054edc0) (0xc0004c5e00) Stream removed, broadcasting: 3\nI0204 12:22:21.698710 233 log.go:181] (0xc00054edc0) (0xc000798140) Stream removed, broadcasting: 5\n" Feb 4 12:22:21.703: INFO: stdout: "" Feb 4 12:22:21.703: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-1107 exec execpod-affinityszsnl -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.16 32008' Feb 4 12:22:24.174: INFO: stderr: "I0204 12:22:24.094637 249 log.go:181] (0xc00003a6e0) (0xc0007ba280) Create stream\nI0204 12:22:24.094677 249 log.go:181] (0xc00003a6e0) (0xc0007ba280) Stream added, broadcasting: 1\nI0204 12:22:24.097747 249 log.go:181] (0xc00003a6e0) Reply frame received for 1\nI0204 12:22:24.097775 249 log.go:181] (0xc00003a6e0) (0xc000478000) Create stream\nI0204 12:22:24.097781 249 log.go:181] (0xc00003a6e0) (0xc000478000) Stream added, broadcasting: 3\nI0204 12:22:24.098338 249 log.go:181] (0xc00003a6e0) Reply frame received for 3\nI0204 12:22:24.098366 249 log.go:181] (0xc00003a6e0) (0xc0004783c0) Create stream\nI0204 12:22:24.098375 249 log.go:181] (0xc00003a6e0) (0xc0004783c0) Stream added, broadcasting: 5\nI0204 12:22:24.098893 249 log.go:181] (0xc00003a6e0) Reply frame received for 5\nI0204 12:22:24.166577 249 log.go:181] (0xc00003a6e0) Data frame received for 5\nI0204 
12:22:24.166626 249 log.go:181] (0xc0004783c0) (5) Data frame handling\nI0204 12:22:24.166643 249 log.go:181] (0xc0004783c0) (5) Data frame sent\nI0204 12:22:24.166668 249 log.go:181] (0xc00003a6e0) Data frame received for 5\nI0204 12:22:24.166682 249 log.go:181] (0xc0004783c0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.16 32008\nConnection to 172.18.0.16 32008 port [tcp/*] succeeded!\nI0204 12:22:24.166716 249 log.go:181] (0xc00003a6e0) Data frame received for 3\nI0204 12:22:24.166747 249 log.go:181] (0xc000478000) (3) Data frame handling\nI0204 12:22:24.168072 249 log.go:181] (0xc00003a6e0) Data frame received for 1\nI0204 12:22:24.168101 249 log.go:181] (0xc0007ba280) (1) Data frame handling\nI0204 12:22:24.168134 249 log.go:181] (0xc0007ba280) (1) Data frame sent\nI0204 12:22:24.168163 249 log.go:181] (0xc00003a6e0) (0xc0007ba280) Stream removed, broadcasting: 1\nI0204 12:22:24.168377 249 log.go:181] (0xc00003a6e0) Go away received\nI0204 12:22:24.168615 249 log.go:181] (0xc00003a6e0) (0xc0007ba280) Stream removed, broadcasting: 1\nI0204 12:22:24.168640 249 log.go:181] (0xc00003a6e0) (0xc000478000) Stream removed, broadcasting: 3\nI0204 12:22:24.168656 249 log.go:181] (0xc00003a6e0) (0xc0004783c0) Stream removed, broadcasting: 5\n" Feb 4 12:22:24.174: INFO: stdout: "" Feb 4 12:22:26.591: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-1107 exec execpod-affinityszsnl -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.14:32008/ ; done' Feb 4 12:22:27.259: INFO: stderr: "I0204 12:22:27.094981 267 log.go:181] (0xc000a91550) (0xc0007cd680) Create stream\nI0204 12:22:27.095039 267 log.go:181] (0xc000a91550) (0xc0007cd680) Stream added, broadcasting: 1\nI0204 12:22:27.098194 267 log.go:181] (0xc000a91550) Reply frame received for 1\nI0204 12:22:27.098232 267 log.go:181] (0xc000a91550) (0xc0007cc000) Create stream\nI0204 12:22:27.098243 267 log.go:181] (0xc000a91550) (0xc0007cc000) Stream added, broadcasting: 3\nI0204 12:22:27.099405 267 log.go:181] (0xc000a91550) Reply frame received for 3\nI0204 12:22:27.099438 267 log.go:181] (0xc000a91550) (0xc0005ae280) Create stream\nI0204 12:22:27.099448 267 log.go:181] (0xc000a91550) (0xc0005ae280) Stream added, broadcasting: 5\nI0204 12:22:27.100302 267 log.go:181] (0xc000a91550) Reply frame received for 5\nI0204 12:22:27.177513 267 log.go:181] (0xc000a91550) Data frame received for 5\nI0204 12:22:27.177552 267 log.go:181] (0xc0005ae280) (5) Data frame handling\nI0204 12:22:27.177570 267 log.go:181] (0xc0005ae280) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:27.177592 267 log.go:181] (0xc000a91550) Data frame received for 3\nI0204 12:22:27.177604 267 log.go:181] (0xc0007cc000) (3) Data frame handling\nI0204 12:22:27.177620 267 log.go:181] (0xc0007cc000) (3) Data frame sent\nI0204 12:22:27.179923 267 log.go:181] (0xc000a91550) Data frame received for 3\nI0204 12:22:27.179941 267 log.go:181] (0xc0007cc000) (3) Data frame handling\nI0204 12:22:27.179950 267 log.go:181] (0xc0007cc000) (3) Data frame sent\nI0204 12:22:27.180636 267 log.go:181] (0xc000a91550) Data frame received for 5\nI0204 12:22:27.180649 267 log.go:181] (0xc0005ae280) (5) Data frame handling\nI0204 12:22:27.180657 267 log.go:181] (0xc0005ae280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:27.184712 267 log.go:181] (0xc000a91550) Data 
frame received for 3\nI0204 12:22:27.184784 267 log.go:181] (0xc0007cc000) (3) Data frame handling\nI0204 12:22:27.184829 267 log.go:181] (0xc0007cc000) (3) Data frame sent\nI0204 12:22:27.185457 267 log.go:181] (0xc000a91550) Data frame received for 3\nI0204 12:22:27.185494 267 log.go:181] (0xc0007cc000) (3) Data frame handling\nI0204 12:22:27.185519 267 log.go:181] (0xc0007cc000) (3) Data frame sent\nI0204 12:22:27.186079 267 log.go:181] (0xc000a91550) Data frame received for 5\nI0204 12:22:27.186102 267 log.go:181] (0xc0005ae280) (5) Data frame handling\nI0204 12:22:27.186117 267 log.go:181] (0xc0005ae280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:27.186143 267 log.go:181] (0xc000a91550) Data frame received for 3\nI0204 12:22:27.186157 267 log.go:181] (0xc0007cc000) (3) Data frame handling\nI0204 12:22:27.186169 267 log.go:181] (0xc0007cc000) (3) Data frame sent\nI0204 12:22:27.191646 267 log.go:181] (0xc000a91550) Data frame received for 3\nI0204 12:22:27.191660 267 log.go:181] (0xc0007cc000) (3) Data frame handling\nI0204 12:22:27.191673 267 log.go:181] (0xc0007cc000) (3) Data frame sent\nI0204 12:22:27.192215 267 log.go:181] (0xc000a91550) Data frame received for 5\nI0204 12:22:27.192226 267 log.go:181] (0xc0005ae280) (5) Data frame handling\nI0204 12:22:27.192235 267 log.go:181] (0xc0005ae280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:27.192465 267 log.go:181] (0xc000a91550) Data frame received for 3\nI0204 12:22:27.192479 267 log.go:181] (0xc0007cc000) (3) Data frame handling\nI0204 12:22:27.192490 267 log.go:181] (0xc0007cc000) (3) Data frame sent\nI0204 12:22:27.197552 267 log.go:181] (0xc000a91550) Data frame received for 5\nI0204 12:22:27.197571 267 log.go:181] (0xc0005ae280) (5) Data frame handling\nI0204 12:22:27.197578 267 log.go:181] (0xc0005ae280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:27.197588 267 log.go:181] (0xc000a91550) Data frame received for 3\nI0204 12:22:27.197592 267 log.go:181] (0xc0007cc000) (3) Data frame handling\nI0204 12:22:27.197597 267 log.go:181] (0xc0007cc000) (3) Data frame sent\nI0204 12:22:27.204308 267 log.go:181] (0xc000a91550) Data frame received for 5\nI0204 12:22:27.204329 267 log.go:181] (0xc0005ae280) (5) Data frame handling\nI0204 12:22:27.204336 267 log.go:181] (0xc0005ae280) (5) Data frame sent\nI0204 12:22:27.204340 267 log.go:181] (0xc000a91550) Data frame received for 5\nI0204 12:22:27.204344 267 log.go:181] (0xc0005ae280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:27.204355 267 log.go:181] (0xc0005ae280) (5) Data frame sent\nI0204 12:22:27.204359 267 log.go:181] (0xc000a91550) Data frame received for 3\nI0204 12:22:27.204363 267 log.go:181] (0xc0007cc000) (3) Data frame handling\nI0204 12:22:27.204370 267 log.go:181] (0xc0007cc000) (3) Data frame sent\nI0204 12:22:27.204376 267 log.go:181] (0xc000a91550) Data frame received for 3\nI0204 12:22:27.204382 267 log.go:181] (0xc0007cc000) (3) Data frame handling\nI0204 12:22:27.204392 267 log.go:181] (0xc0007cc000) (3) Data frame sent\nI0204 12:22:27.209000 267 log.go:181] (0xc000a91550) Data frame received for 3\nI0204 12:22:27.209010 267 log.go:181] (0xc0007cc000) (3) Data frame handling\nI0204 12:22:27.209016 267 log.go:181] (0xc0007cc000) (3) Data frame sent\nI0204 12:22:27.209407 267 log.go:181] (0xc000a91550) Data frame received for 3\nI0204 
12:22:27.209420 267 log.go:181] (0xc0007cc000) (3) Data frame handling\nI0204 12:22:27.209427 267 log.go:181] (0xc0007cc000) (3) Data frame sent\nI0204 12:22:27.209435 267 log.go:181] (0xc000a91550) Data frame received for 5\nI0204 12:22:27.209440 267 log.go:181] (0xc0005ae280) (5) Data frame handling\nI0204 12:22:27.209445 267 log.go:181] (0xc0005ae280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:27.212671 267 log.go:181] (0xc000a91550) Data frame received for 3\nI0204 12:22:27.212679 267 log.go:181] (0xc0007cc000) (3) Data frame handling\nI0204 12:22:27.212684 267 log.go:181] (0xc0007cc000) (3) Data frame sent\nI0204 12:22:27.213198 267 log.go:181] (0xc000a91550) Data frame received for 5\nI0204 12:22:27.213211 267 log.go:181] (0xc0005ae280) (5) Data frame handling\nI0204 12:22:27.213222 267 log.go:181] (0xc0005ae280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:27.216731 267 log.go:181] (0xc000a91550) Data frame received for 3\nI0204 12:22:27.216742 267 log.go:181] (0xc0007cc000) (3) Data frame handling\nI0204 12:22:27.216755 267 log.go:181] (0xc0007cc000) (3) Data frame sent\nI0204 12:22:27.217072 267 log.go:181] (0xc000a91550) Data frame received for 3\nI0204 12:22:27.217084 267 log.go:181] (0xc0007cc000) (3) Data frame handling\nI0204 12:22:27.217098 267 log.go:181] (0xc0007cc000) (3) Data frame sent\nI0204 12:22:27.217486 267 log.go:181] (0xc000a91550) Data frame received for 3\nI0204 12:22:27.217498 267 log.go:181] (0xc0007cc000) (3) Data frame handling\nI0204 12:22:27.217505 267 log.go:181] (0xc0007cc000) (3) Data frame sent\nI0204 12:22:27.217514 267 log.go:181] (0xc000a91550) Data frame received for 5\nI0204 12:22:27.217518 267 log.go:181] (0xc0005ae280) (5) Data frame handling\nI0204 12:22:27.217525 267 log.go:181] (0xc0005ae280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:27.221977 267 log.go:181] (0xc000a91550) Data frame received for 3\nI0204 12:22:27.221996 267 log.go:181] (0xc0007cc000) (3) Data frame handling\nI0204 12:22:27.222010 267 log.go:181] (0xc0007cc000) (3) Data frame sent\nI0204 12:22:27.222519 267 log.go:181] (0xc000a91550) Data frame received for 3\nI0204 12:22:27.222535 267 log.go:181] (0xc0007cc000) (3) Data frame handling\nI0204 12:22:27.222549 267 log.go:181] (0xc0007cc000) (3) Data frame sent\nI0204 12:22:27.222570 267 log.go:181] (0xc000a91550) Data frame received for 5\nI0204 12:22:27.222584 267 log.go:181] (0xc0005ae280) (5) Data frame handling\nI0204 12:22:27.222596 267 log.go:181] (0xc0005ae280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:27.227034 267 log.go:181] (0xc000a91550) Data frame received for 3\nI0204 12:22:27.227058 267 log.go:181] (0xc0007cc000) (3) Data frame handling\nI0204 12:22:27.227076 267 log.go:181] (0xc0007cc000) (3) Data frame sent\nI0204 12:22:27.227176 267 log.go:181] (0xc000a91550) Data frame received for 3\nI0204 12:22:27.227187 267 log.go:181] (0xc0007cc000) (3) Data frame handling\nI0204 12:22:27.227194 267 log.go:181] (0xc0007cc000) (3) Data frame sent\nI0204 12:22:27.227211 267 log.go:181] (0xc000a91550) Data frame received for 5\nI0204 12:22:27.227231 267 log.go:181] (0xc0005ae280) (5) Data frame handling\nI0204 12:22:27.227243 267 log.go:181] (0xc0005ae280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:27.233734 267 log.go:181] (0xc000a91550) 
Data frame received for 3\nI0204 12:22:27.233755 267 log.go:181] (0xc0007cc000) (3) Data frame handling\nI0204 12:22:27.233780 267 log.go:181] (0xc0007cc000) (3) Data frame sent\nI0204 12:22:27.234125 267 log.go:181] (0xc000a91550) Data frame received for 3\nI0204 12:22:27.234139 267 log.go:181] (0xc000a91550) Data frame received for 5\nI0204 12:22:27.234172 267 log.go:181] (0xc0005ae280) (5) Data frame handling\nI0204 12:22:27.234188 267 log.go:181] (0xc0005ae280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:27.234200 267 log.go:181] (0xc0007cc000) (3) Data frame handling\nI0204 12:22:27.234207 267 log.go:181] (0xc0007cc000) (3) Data frame sent\nI0204 12:22:27.237421 267 log.go:181] (0xc000a91550) Data frame received for 3\nI0204 12:22:27.237440 267 log.go:181] (0xc0007cc000) (3) Data frame handling\nI0204 12:22:27.237454 267 log.go:181] (0xc0007cc000) (3) Data frame sent\nI0204 12:22:27.237814 267 log.go:181] (0xc000a91550) Data frame received for 3\nI0204 12:22:27.237837 267 log.go:181] (0xc0007cc000) (3) Data frame handling\nI0204 12:22:27.237848 267 log.go:181] (0xc0007cc000) (3) Data frame sent\nI0204 12:22:27.237863 267 log.go:181] (0xc000a91550) Data frame received for 5\nI0204 12:22:27.237869 267 log.go:181] (0xc0005ae280) (5) Data frame handling\nI0204 12:22:27.237877 267 log.go:181] (0xc0005ae280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:27.240614 267 log.go:181] (0xc000a91550) Data frame received for 3\nI0204 12:22:27.240624 267 log.go:181] (0xc0007cc000) (3) Data frame handling\nI0204 12:22:27.240629 267 log.go:181] (0xc0007cc000) (3) Data frame sent\nI0204 12:22:27.241223 267 log.go:181] (0xc000a91550) Data frame received for 3\nI0204 12:22:27.241239 267 log.go:181] (0xc0007cc000) (3) Data frame handling\nI0204 12:22:27.241247 267 log.go:181] (0xc0007cc000) (3) Data frame sent\nI0204 12:22:27.241259 267 log.go:181] (0xc000a91550) Data frame received for 5\nI0204 12:22:27.241265 267 log.go:181] (0xc0005ae280) (5) Data frame handling\nI0204 12:22:27.241275 267 log.go:181] (0xc0005ae280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:27.244467 267 log.go:181] (0xc000a91550) Data frame received for 3\nI0204 12:22:27.244481 267 log.go:181] (0xc0007cc000) (3) Data frame handling\nI0204 12:22:27.244493 267 log.go:181] (0xc0007cc000) (3) Data frame sent\nI0204 12:22:27.244716 267 log.go:181] (0xc000a91550) Data frame received for 5\nI0204 12:22:27.244728 267 log.go:181] (0xc0005ae280) (5) Data frame handling\nI0204 12:22:27.244736 267 log.go:181] (0xc0005ae280) (5) Data frame sent\nI0204 12:22:27.244742 267 log.go:181] (0xc000a91550) Data frame received for 5\nI0204 12:22:27.244749 267 log.go:181] (0xc0005ae280) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:27.244762 267 log.go:181] (0xc0005ae280) (5) Data frame sent\nI0204 12:22:27.244772 267 log.go:181] (0xc000a91550) Data frame received for 3\nI0204 12:22:27.244782 267 log.go:181] (0xc0007cc000) (3) Data frame handling\nI0204 12:22:27.244790 267 log.go:181] (0xc0007cc000) (3) Data frame sent\nI0204 12:22:27.248052 267 log.go:181] (0xc000a91550) Data frame received for 3\nI0204 12:22:27.248070 267 log.go:181] (0xc0007cc000) (3) Data frame handling\nI0204 12:22:27.248087 267 log.go:181] (0xc0007cc000) (3) Data frame sent\nI0204 12:22:27.248560 267 log.go:181] (0xc000a91550) Data frame received for 3\nI0204 
12:22:27.248568 267 log.go:181] (0xc0007cc000) (3) Data frame handling\nI0204 12:22:27.248574 267 log.go:181] (0xc0007cc000) (3) Data frame sent\nI0204 12:22:27.248579 267 log.go:181] (0xc000a91550) Data frame received for 5\nI0204 12:22:27.248583 267 log.go:181] (0xc0005ae280) (5) Data frame handling\nI0204 12:22:27.248588 267 log.go:181] (0xc0005ae280) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:27.253818 267 log.go:181] (0xc000a91550) Data frame received for 3\nI0204 12:22:27.253830 267 log.go:181] (0xc0007cc000) (3) Data frame handling\nI0204 12:22:27.253838 267 log.go:181] (0xc0007cc000) (3) Data frame sent\nI0204 12:22:27.254329 267 log.go:181] (0xc000a91550) Data frame received for 3\nI0204 12:22:27.254344 267 log.go:181] (0xc0007cc000) (3) Data frame handling\nI0204 12:22:27.254391 267 log.go:181] (0xc000a91550) Data frame received for 5\nI0204 12:22:27.254422 267 log.go:181] (0xc0005ae280) (5) Data frame handling\nI0204 12:22:27.255700 267 log.go:181] (0xc000a91550) Data frame received for 1\nI0204 12:22:27.255712 267 log.go:181] (0xc0007cd680) (1) Data frame handling\nI0204 12:22:27.255721 267 log.go:181] (0xc0007cd680) (1) Data frame sent\nI0204 12:22:27.255734 267 log.go:181] (0xc000a91550) (0xc0007cd680) Stream removed, broadcasting: 1\nI0204 12:22:27.255763 267 log.go:181] (0xc000a91550) Go away received\nI0204 12:22:27.256001 267 log.go:181] (0xc000a91550) (0xc0007cd680) Stream removed, broadcasting: 1\nI0204 12:22:27.256010 267 log.go:181] (0xc000a91550) (0xc0007cc000) Stream removed, broadcasting: 3\nI0204 12:22:27.256014 267 log.go:181] (0xc000a91550) (0xc0005ae280) Stream removed, broadcasting: 5\n" Feb 4 12:22:27.260: INFO: stdout: "\naffinity-nodeport-transition-hcxc7\naffinity-nodeport-transition-hcxc7\naffinity-nodeport-transition-hcxc7\naffinity-nodeport-transition-hcxc7\naffinity-nodeport-transition-hcxc7\naffinity-nodeport-transition-hcxc7\naffinity-nodeport-transition-hcxc7\naffinity-nodeport-transition-hcxc7\naffinity-nodeport-transition-hcxc7\naffinity-nodeport-transition-hcxc7\naffinity-nodeport-transition-hcxc7\naffinity-nodeport-transition-9f8fx\naffinity-nodeport-transition-8bcm2\naffinity-nodeport-transition-8bcm2\naffinity-nodeport-transition-hcxc7\naffinity-nodeport-transition-hcxc7" Feb 4 12:22:27.260: INFO: Received response from host: affinity-nodeport-transition-hcxc7 Feb 4 12:22:27.260: INFO: Received response from host: affinity-nodeport-transition-hcxc7 Feb 4 12:22:27.260: INFO: Received response from host: affinity-nodeport-transition-hcxc7 Feb 4 12:22:27.260: INFO: Received response from host: affinity-nodeport-transition-hcxc7 Feb 4 12:22:27.260: INFO: Received response from host: affinity-nodeport-transition-hcxc7 Feb 4 12:22:27.260: INFO: Received response from host: affinity-nodeport-transition-hcxc7 Feb 4 12:22:27.260: INFO: Received response from host: affinity-nodeport-transition-hcxc7 Feb 4 12:22:27.260: INFO: Received response from host: affinity-nodeport-transition-hcxc7 Feb 4 12:22:27.260: INFO: Received response from host: affinity-nodeport-transition-hcxc7 Feb 4 12:22:27.260: INFO: Received response from host: affinity-nodeport-transition-hcxc7 Feb 4 12:22:27.260: INFO: Received response from host: affinity-nodeport-transition-hcxc7 Feb 4 12:22:27.260: INFO: Received response from host: affinity-nodeport-transition-9f8fx Feb 4 12:22:27.260: INFO: Received response from host: affinity-nodeport-transition-8bcm2 Feb 4 12:22:27.260: INFO: Received response from host: 
affinity-nodeport-transition-8bcm2 Feb 4 12:22:27.260: INFO: Received response from host: affinity-nodeport-transition-hcxc7 Feb 4 12:22:27.260: INFO: Received response from host: affinity-nodeport-transition-hcxc7 Feb 4 12:22:27.837: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-1107 exec execpod-affinityszsnl -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.14:32008/ ; done' Feb 4 12:22:28.509: INFO: stderr: "I0204 12:22:28.342141 282 log.go:181] (0xc0005ac840) (0xc000aaa3c0) Create stream\nI0204 12:22:28.342230 282 log.go:181] (0xc0005ac840) (0xc000aaa3c0) Stream added, broadcasting: 1\nI0204 12:22:28.345472 282 log.go:181] (0xc0005ac840) Reply frame received for 1\nI0204 12:22:28.345513 282 log.go:181] (0xc0005ac840) (0xc000538000) Create stream\nI0204 12:22:28.345524 282 log.go:181] (0xc0005ac840) (0xc000538000) Stream added, broadcasting: 3\nI0204 12:22:28.346426 282 log.go:181] (0xc0005ac840) Reply frame received for 3\nI0204 12:22:28.346492 282 log.go:181] (0xc0005ac840) (0xc0005380a0) Create stream\nI0204 12:22:28.346515 282 log.go:181] (0xc0005ac840) (0xc0005380a0) Stream added, broadcasting: 5\nI0204 12:22:28.347444 282 log.go:181] (0xc0005ac840) Reply frame received for 5\nI0204 12:22:28.401105 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.401141 282 log.go:181] (0xc000538000) (3) Data frame handling\nI0204 12:22:28.401156 282 log.go:181] (0xc000538000) (3) Data frame sent\nI0204 12:22:28.401198 282 log.go:181] (0xc0005ac840) Data frame received for 5\nI0204 12:22:28.401222 282 log.go:181] (0xc0005380a0) (5) Data frame handling\nI0204 12:22:28.401244 282 log.go:181] (0xc0005380a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:28.406389 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.406425 282 log.go:181] (0xc000538000) (3) Data frame handling\nI0204 12:22:28.406450 282 log.go:181] (0xc000538000) (3) Data frame sent\nI0204 12:22:28.406835 282 log.go:181] (0xc0005ac840) Data frame received for 5\nI0204 12:22:28.406895 282 log.go:181] (0xc0005380a0) (5) Data frame handling\nI0204 12:22:28.406924 282 log.go:181] (0xc0005380a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:28.406945 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.406961 282 log.go:181] (0xc000538000) (3) Data frame handling\nI0204 12:22:28.406978 282 log.go:181] (0xc000538000) (3) Data frame sent\nI0204 12:22:28.412819 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.412990 282 log.go:181] (0xc000538000) (3) Data frame handling\nI0204 12:22:28.413037 282 log.go:181] (0xc000538000) (3) Data frame sent\nI0204 12:22:28.413852 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.413875 282 log.go:181] (0xc000538000) (3) Data frame handling\nI0204 12:22:28.413898 282 log.go:181] (0xc000538000) (3) Data frame sent\nI0204 12:22:28.413914 282 log.go:181] (0xc0005ac840) Data frame received for 5\nI0204 12:22:28.413925 282 log.go:181] (0xc0005380a0) (5) Data frame handling\nI0204 12:22:28.413935 282 log.go:181] (0xc0005380a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:28.419750 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.419773 282 log.go:181] (0xc000538000) (3) Data frame 
handling\nI0204 12:22:28.419788 282 log.go:181] (0xc000538000) (3) Data frame sent\nI0204 12:22:28.421027 282 log.go:181] (0xc0005ac840) Data frame received for 5\nI0204 12:22:28.421061 282 log.go:181] (0xc0005380a0) (5) Data frame handling\nI0204 12:22:28.421091 282 log.go:181] (0xc0005380a0) (5) Data frame sent\n+ echo\n+ curlI0204 12:22:28.421143 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.421162 282 log.go:181] (0xc000538000) (3) Data frame handling\nI0204 12:22:28.421178 282 log.go:181] (0xc0005ac840) Data frame received for 5\nI0204 12:22:28.421200 282 log.go:181] (0xc0005380a0) (5) Data frame handling\nI0204 12:22:28.421212 282 log.go:181] (0xc000538000) (3) Data frame sent\n -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:28.421224 282 log.go:181] (0xc0005380a0) (5) Data frame sent\nI0204 12:22:28.425911 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.425934 282 log.go:181] (0xc000538000) (3) Data frame handling\nI0204 12:22:28.425955 282 log.go:181] (0xc000538000) (3) Data frame sent\nI0204 12:22:28.427172 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.427201 282 log.go:181] (0xc000538000) (3) Data frame handling\nI0204 12:22:28.427217 282 log.go:181] (0xc000538000) (3) Data frame sent\nI0204 12:22:28.427237 282 log.go:181] (0xc0005ac840) Data frame received for 5\nI0204 12:22:28.427248 282 log.go:181] (0xc0005380a0) (5) Data frame handling\nI0204 12:22:28.427259 282 log.go:181] (0xc0005380a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:28.431160 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.431196 282 log.go:181] (0xc000538000) (3) Data frame handling\nI0204 12:22:28.431221 282 log.go:181] (0xc000538000) (3) Data frame sent\nI0204 12:22:28.432212 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.432238 282 log.go:181] (0xc000538000) (3) Data frame handling\nI0204 12:22:28.432251 282 log.go:181] (0xc000538000) (3) Data frame sent\nI0204 12:22:28.432272 282 log.go:181] (0xc0005ac840) Data frame received for 5\nI0204 12:22:28.432283 282 log.go:181] (0xc0005380a0) (5) Data frame handling\nI0204 12:22:28.432295 282 log.go:181] (0xc0005380a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:28.437036 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.437077 282 log.go:181] (0xc000538000) (3) Data frame handling\nI0204 12:22:28.437128 282 log.go:181] (0xc000538000) (3) Data frame sent\nI0204 12:22:28.438240 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.438259 282 log.go:181] (0xc000538000) (3) Data frame handling\nI0204 12:22:28.438271 282 log.go:181] (0xc000538000) (3) Data frame sent\nI0204 12:22:28.438284 282 log.go:181] (0xc0005ac840) Data frame received for 5\nI0204 12:22:28.438300 282 log.go:181] (0xc0005380a0) (5) Data frame handling\nI0204 12:22:28.438336 282 log.go:181] (0xc0005380a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:28.444407 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.444430 282 log.go:181] (0xc000538000) (3) Data frame handling\nI0204 12:22:28.444464 282 log.go:181] (0xc000538000) (3) Data frame sent\nI0204 12:22:28.445232 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.445257 282 log.go:181] (0xc000538000) (3) Data frame handling\nI0204 12:22:28.445268 
282 log.go:181] (0xc000538000) (3) Data frame sent\nI0204 12:22:28.445283 282 log.go:181] (0xc0005ac840) Data frame received for 5\nI0204 12:22:28.445290 282 log.go:181] (0xc0005380a0) (5) Data frame handling\nI0204 12:22:28.445299 282 log.go:181] (0xc0005380a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:28.449490 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.449517 282 log.go:181] (0xc000538000) (3) Data frame handling\nI0204 12:22:28.449541 282 log.go:181] (0xc000538000) (3) Data frame sent\nI0204 12:22:28.450174 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.450202 282 log.go:181] (0xc000538000) (3) Data frame handling\nI0204 12:22:28.450222 282 log.go:181] (0xc000538000) (3) Data frame sent\nI0204 12:22:28.450257 282 log.go:181] (0xc0005ac840) Data frame received for 5\nI0204 12:22:28.450276 282 log.go:181] (0xc0005380a0) (5) Data frame handling\nI0204 12:22:28.450322 282 log.go:181] (0xc0005380a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:28.456219 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.456261 282 log.go:181] (0xc000538000) (3) Data frame handling\nI0204 12:22:28.456298 282 log.go:181] (0xc000538000) (3) Data frame sent\nI0204 12:22:28.456954 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.457018 282 log.go:181] (0xc000538000) (3) Data frame handling\nI0204 12:22:28.457060 282 log.go:181] (0xc000538000) (3) Data frame sent\nI0204 12:22:28.457089 282 log.go:181] (0xc0005ac840) Data frame received for 5\nI0204 12:22:28.457102 282 log.go:181] (0xc0005380a0) (5) Data frame handling\nI0204 12:22:28.457114 282 log.go:181] (0xc0005380a0) (5) Data frame sent\nI0204 12:22:28.457130 282 log.go:181] (0xc0005ac840) Data frame received for 5\n+ echo\n+ I0204 12:22:28.457147 282 log.go:181] (0xc0005380a0) (5) Data frame handling\nI0204 12:22:28.457185 282 log.go:181] (0xc0005380a0) (5) Data frame sent\ncurl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:28.461149 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.461232 282 log.go:181] (0xc000538000) (3) Data frame handling\nI0204 12:22:28.461260 282 log.go:181] (0xc000538000) (3) Data frame sent\nI0204 12:22:28.462685 282 log.go:181] (0xc0005ac840) Data frame received for 5\nI0204 12:22:28.462710 282 log.go:181] (0xc0005380a0) (5) Data frame handling\nI0204 12:22:28.462728 282 log.go:181] (0xc0005380a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:28.462754 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.462772 282 log.go:181] (0xc000538000) (3) Data frame handling\nI0204 12:22:28.462785 282 log.go:181] (0xc000538000) (3) Data frame sent\nI0204 12:22:28.470058 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.470095 282 log.go:181] (0xc000538000) (3) Data frame handling\nI0204 12:22:28.470144 282 log.go:181] (0xc000538000) (3) Data frame sent\nI0204 12:22:28.470848 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.470874 282 log.go:181] (0xc0005ac840) Data frame received for 5\nI0204 12:22:28.470906 282 log.go:181] (0xc0005380a0) (5) Data frame handling\nI0204 12:22:28.470925 282 log.go:181] (0xc0005380a0) (5) Data frame sent\nI0204 12:22:28.470944 282 log.go:181] (0xc0005ac840) Data frame received for 5\nI0204 12:22:28.470958 282 log.go:181] (0xc0005380a0) 
(5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:28.471007 282 log.go:181] (0xc000538000) (3) Data frame handling\nI0204 12:22:28.471048 282 log.go:181] (0xc000538000) (3) Data frame sent\nI0204 12:22:28.471071 282 log.go:181] (0xc0005380a0) (5) Data frame sent\nI0204 12:22:28.475751 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.475767 282 log.go:181] (0xc000538000) (3) Data frame handling\nI0204 12:22:28.475778 282 log.go:181] (0xc000538000) (3) Data frame sent\nI0204 12:22:28.476815 282 log.go:181] (0xc0005ac840) Data frame received for 5\nI0204 12:22:28.476934 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.476973 282 log.go:181] (0xc000538000) (3) Data frame handling\nI0204 12:22:28.476986 282 log.go:181] (0xc000538000) (3) Data frame sent\nI0204 12:22:28.476999 282 log.go:181] (0xc0005380a0) (5) Data frame handling\nI0204 12:22:28.477007 282 log.go:181] (0xc0005380a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:28.482620 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.482640 282 log.go:181] (0xc000538000) (3) Data frame handling\nI0204 12:22:28.482655 282 log.go:181] (0xc000538000) (3) Data frame sent\nI0204 12:22:28.483737 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.483766 282 log.go:181] (0xc000538000) (3) Data frame handling\nI0204 12:22:28.483778 282 log.go:181] (0xc000538000) (3) Data frame sent\nI0204 12:22:28.483794 282 log.go:181] (0xc0005ac840) Data frame received for 5\nI0204 12:22:28.483803 282 log.go:181] (0xc0005380a0) (5) Data frame handling\nI0204 12:22:28.483812 282 log.go:181] (0xc0005380a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:28.488275 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.488297 282 log.go:181] (0xc000538000) (3) Data frame handling\nI0204 12:22:28.488313 282 log.go:181] (0xc000538000) (3) Data frame sent\nI0204 12:22:28.489293 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.489325 282 log.go:181] (0xc000538000) (3) Data frame handling\nI0204 12:22:28.489361 282 log.go:181] (0xc000538000) (3) Data frame sent\nI0204 12:22:28.489403 282 log.go:181] (0xc0005ac840) Data frame received for 5\nI0204 12:22:28.489421 282 log.go:181] (0xc0005380a0) (5) Data frame handling\nI0204 12:22:28.489436 282 log.go:181] (0xc0005380a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:28.493845 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.493859 282 log.go:181] (0xc000538000) (3) Data frame handling\nI0204 12:22:28.493879 282 log.go:181] (0xc000538000) (3) Data frame sent\nI0204 12:22:28.494887 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.494913 282 log.go:181] (0xc000538000) (3) Data frame handling\nI0204 12:22:28.494926 282 log.go:181] (0xc000538000) (3) Data frame sent\nI0204 12:22:28.495021 282 log.go:181] (0xc0005ac840) Data frame received for 5\nI0204 12:22:28.495052 282 log.go:181] (0xc0005380a0) (5) Data frame handling\nI0204 12:22:28.495089 282 log.go:181] (0xc0005380a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:28.500633 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.500651 282 log.go:181] (0xc000538000) (3) Data frame handling\nI0204 12:22:28.500662 
282 log.go:181] (0xc000538000) (3) Data frame sent\nI0204 12:22:28.501757 282 log.go:181] (0xc0005ac840) Data frame received for 5\nI0204 12:22:28.501799 282 log.go:181] (0xc0005380a0) (5) Data frame handling\nI0204 12:22:28.501893 282 log.go:181] (0xc0005ac840) Data frame received for 3\nI0204 12:22:28.501923 282 log.go:181] (0xc000538000) (3) Data frame handling\nI0204 12:22:28.503817 282 log.go:181] (0xc0005ac840) Data frame received for 1\nI0204 12:22:28.503837 282 log.go:181] (0xc000aaa3c0) (1) Data frame handling\nI0204 12:22:28.503849 282 log.go:181] (0xc000aaa3c0) (1) Data frame sent\nI0204 12:22:28.503865 282 log.go:181] (0xc0005ac840) (0xc000aaa3c0) Stream removed, broadcasting: 1\nI0204 12:22:28.503883 282 log.go:181] (0xc0005ac840) Go away received\nI0204 12:22:28.504406 282 log.go:181] (0xc0005ac840) (0xc000aaa3c0) Stream removed, broadcasting: 1\nI0204 12:22:28.504427 282 log.go:181] (0xc0005ac840) (0xc000538000) Stream removed, broadcasting: 3\nI0204 12:22:28.504438 282 log.go:181] (0xc0005ac840) (0xc0005380a0) Stream removed, broadcasting: 5\n" Feb 4 12:22:28.510: INFO: stdout: "\naffinity-nodeport-transition-8bcm2\naffinity-nodeport-transition-9f8fx\naffinity-nodeport-transition-hcxc7\naffinity-nodeport-transition-8bcm2\naffinity-nodeport-transition-8bcm2\naffinity-nodeport-transition-hcxc7\naffinity-nodeport-transition-9f8fx\naffinity-nodeport-transition-9f8fx\naffinity-nodeport-transition-8bcm2\naffinity-nodeport-transition-9f8fx\naffinity-nodeport-transition-9f8fx\naffinity-nodeport-transition-9f8fx\naffinity-nodeport-transition-hcxc7\naffinity-nodeport-transition-hcxc7\naffinity-nodeport-transition-hcxc7\naffinity-nodeport-transition-8bcm2" Feb 4 12:22:28.510: INFO: Received response from host: affinity-nodeport-transition-8bcm2 Feb 4 12:22:28.510: INFO: Received response from host: affinity-nodeport-transition-9f8fx Feb 4 12:22:28.510: INFO: Received response from host: affinity-nodeport-transition-hcxc7 Feb 4 12:22:28.510: INFO: Received response from host: affinity-nodeport-transition-8bcm2 Feb 4 12:22:28.510: INFO: Received response from host: affinity-nodeport-transition-8bcm2 Feb 4 12:22:28.510: INFO: Received response from host: affinity-nodeport-transition-hcxc7 Feb 4 12:22:28.510: INFO: Received response from host: affinity-nodeport-transition-9f8fx Feb 4 12:22:28.510: INFO: Received response from host: affinity-nodeport-transition-9f8fx Feb 4 12:22:28.510: INFO: Received response from host: affinity-nodeport-transition-8bcm2 Feb 4 12:22:28.510: INFO: Received response from host: affinity-nodeport-transition-9f8fx Feb 4 12:22:28.510: INFO: Received response from host: affinity-nodeport-transition-9f8fx Feb 4 12:22:28.510: INFO: Received response from host: affinity-nodeport-transition-9f8fx Feb 4 12:22:28.510: INFO: Received response from host: affinity-nodeport-transition-hcxc7 Feb 4 12:22:28.510: INFO: Received response from host: affinity-nodeport-transition-hcxc7 Feb 4 12:22:28.510: INFO: Received response from host: affinity-nodeport-transition-hcxc7 Feb 4 12:22:28.510: INFO: Received response from host: affinity-nodeport-transition-8bcm2 Feb 4 12:22:58.510: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-1107 exec execpod-affinityszsnl -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.14:32008/ ; done' Feb 4 12:22:58.920: INFO: stderr: "I0204 12:22:58.732951 294 log.go:181] (0xc00003a0b0) (0xc0005cc1e0) Create stream\nI0204 
12:22:58.732994 294 log.go:181] (0xc00003a0b0) (0xc0005cc1e0) Stream added, broadcasting: 1\nI0204 12:22:58.734758 294 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI0204 12:22:58.734792 294 log.go:181] (0xc00003a0b0) (0xc0005cc320) Create stream\nI0204 12:22:58.734799 294 log.go:181] (0xc00003a0b0) (0xc0005cc320) Stream added, broadcasting: 3\nI0204 12:22:58.735604 294 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI0204 12:22:58.735639 294 log.go:181] (0xc00003a0b0) (0xc000e80000) Create stream\nI0204 12:22:58.735655 294 log.go:181] (0xc00003a0b0) (0xc000e80000) Stream added, broadcasting: 5\nI0204 12:22:58.736261 294 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI0204 12:22:58.833705 294 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0204 12:22:58.833739 294 log.go:181] (0xc000e80000) (5) Data frame handling\nI0204 12:22:58.833752 294 log.go:181] (0xc000e80000) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:58.833766 294 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 12:22:58.833773 294 log.go:181] (0xc0005cc320) (3) Data frame handling\nI0204 12:22:58.833781 294 log.go:181] (0xc0005cc320) (3) Data frame sent\nI0204 12:22:58.840332 294 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 12:22:58.840356 294 log.go:181] (0xc0005cc320) (3) Data frame handling\nI0204 12:22:58.840367 294 log.go:181] (0xc0005cc320) (3) Data frame sent\nI0204 12:22:58.840375 294 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 12:22:58.840382 294 log.go:181] (0xc0005cc320) (3) Data frame handling\nI0204 12:22:58.840409 294 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0204 12:22:58.840437 294 log.go:181] (0xc000e80000) (5) Data frame handling\nI0204 12:22:58.840448 294 log.go:181] (0xc000e80000) (5) Data frame sent\nI0204 12:22:58.840460 294 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0204 12:22:58.840470 294 log.go:181] (0xc000e80000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\n+ echo\n+ I0204 12:22:58.840491 294 log.go:181] (0xc000e80000) (5) Data frame sent\nI0204 12:22:58.840501 294 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0204 12:22:58.840510 294 log.go:181] (0xc000e80000) (5) Data frame handling\nI0204 12:22:58.840519 294 log.go:181] (0xc000e80000) (5) Data frame sent\ncurl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:58.840533 294 log.go:181] (0xc0005cc320) (3) Data frame sent\nI0204 12:22:58.840550 294 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 12:22:58.840559 294 log.go:181] (0xc0005cc320) (3) Data frame handling\nI0204 12:22:58.840567 294 log.go:181] (0xc0005cc320) (3) Data frame sent\nI0204 12:22:58.843405 294 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 12:22:58.843464 294 log.go:181] (0xc0005cc320) (3) Data frame handling\nI0204 12:22:58.843480 294 log.go:181] (0xc0005cc320) (3) Data frame sent\nI0204 12:22:58.844652 294 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0204 12:22:58.844667 294 log.go:181] (0xc000e80000) (5) Data frame handling\nI0204 12:22:58.844684 294 log.go:181] (0xc000e80000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32008/\nI0204 12:22:58.845224 294 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 12:22:58.845240 294 log.go:181] (0xc0005cc320) (3) Data frame handling\nI0204 12:22:58.845249 294 log.go:181] (0xc0005cc320) (3) Data frame sent\nI0204 
12:22:58.848339 294 log.go:181] (0xc00003a0b0) Data frame received for 3\n[... roughly forty near-identical SPDY "Data frame received / handling / sent" entries for stream 3 (stdout) and stream 5 (stderr) condensed; interleaved with them the exec'd shell printed its loop body repeatedly: + echo / + curl -q -s --connect-timeout 2 http://172.18.0.14:32008/ ...]\nI0204 12:22:58.912232 294 log.go:181] (0xc00003a0b0) (0xc0005cc1e0) Stream removed, broadcasting: 1\nI0204 12:22:58.912313 294 log.go:181] (0xc00003a0b0) Go away received\nI0204 12:22:58.912591 294 log.go:181] (0xc00003a0b0) (0xc0005cc1e0) Stream removed, broadcasting: 1\nI0204 12:22:58.912608 294 log.go:181] (0xc00003a0b0) (0xc0005cc320) Stream removed, broadcasting: 3\nI0204 12:22:58.912616 294 log.go:181] (0xc00003a0b0) (0xc000e80000) Stream removed, broadcasting: 5\n"
Feb 4 12:22:58.920: INFO: stdout: "\naffinity-nodeport-transition-8bcm2\naffinity-nodeport-transition-8bcm2\naffinity-nodeport-transition-8bcm2\naffinity-nodeport-transition-8bcm2\naffinity-nodeport-transition-8bcm2\naffinity-nodeport-transition-8bcm2\naffinity-nodeport-transition-8bcm2\naffinity-nodeport-transition-8bcm2\naffinity-nodeport-transition-8bcm2\naffinity-nodeport-transition-8bcm2\naffinity-nodeport-transition-8bcm2\naffinity-nodeport-transition-8bcm2\naffinity-nodeport-transition-8bcm2\naffinity-nodeport-transition-8bcm2\naffinity-nodeport-transition-8bcm2\naffinity-nodeport-transition-8bcm2" Feb 4 12:22:58.921: INFO: Received response from host: affinity-nodeport-transition-8bcm2 Feb 4 12:22:58.921: INFO: Received response from host: affinity-nodeport-transition-8bcm2 Feb 4 12:22:58.921: INFO: Received response from host: affinity-nodeport-transition-8bcm2 Feb 4 12:22:58.921: INFO: Received response from host: affinity-nodeport-transition-8bcm2 Feb 4 12:22:58.921: INFO: Received response from host: affinity-nodeport-transition-8bcm2 Feb 4 12:22:58.921: INFO: Received response from host: affinity-nodeport-transition-8bcm2 Feb 4 12:22:58.921: INFO: Received response from host: affinity-nodeport-transition-8bcm2 Feb 4 12:22:58.921: INFO: Received response from host: affinity-nodeport-transition-8bcm2 Feb 4 12:22:58.921: INFO: Received response from host: affinity-nodeport-transition-8bcm2 Feb 4 12:22:58.921: INFO: Received response from host: affinity-nodeport-transition-8bcm2 Feb 4 12:22:58.921: INFO: Received response from host: affinity-nodeport-transition-8bcm2 Feb 4 12:22:58.921: INFO: Received response from host: affinity-nodeport-transition-8bcm2 Feb 4 12:22:58.921: INFO: Received response from host: affinity-nodeport-transition-8bcm2 Feb 4 12:22:58.921: INFO: Received response from host: affinity-nodeport-transition-8bcm2 Feb 4 12:22:58.921: INFO: Received response from host: affinity-nodeport-transition-8bcm2 Feb 4 12:22:58.921: INFO: Received response from host: affinity-nodeport-transition-8bcm2 Feb 4 12:22:58.921: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-1107, will wait for the garbage collector to delete the pods Feb 4 12:23:02.765: INFO: Deleting ReplicationController affinity-nodeport-transition took: 1.897818536s Feb 4 12:23:08.066: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 5.300235656s [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:24:54.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1107" for this suite. 
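For reference, the spec above toggles a NodePort Service between ClientIP and None session affinity and confirms that, with ClientIP set, all sixteen requests land on the same backend pod. A minimal client-go sketch of that one knob follows; the Service name, namespace, ports, and kubeconfig path are illustrative assumptions, not the suite's generated fixtures.

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    svc := &corev1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "affinity-demo"},
        Spec: corev1.ServiceSpec{
            Type:            corev1.ServiceTypeNodePort,
            SessionAffinity: corev1.ServiceAffinityClientIP, // pin each client IP to one backend
            Selector:        map[string]string{"name": "affinity-demo"},
            Ports:           []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(9376)}},
        },
    }
    created, err := client.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{})
    if err != nil {
        panic(err)
    }

    // "Switching" affinity, as the spec does, is an ordinary update of the same field.
    created.Spec.SessionAffinity = corev1.ServiceAffinityNone
    if _, err := client.CoreV1().Services("default").Update(context.TODO(), created, metav1.UpdateOptions{}); err != nil {
        panic(err)
    }
    fmt.Println("session affinity toggled on", created.Name)
}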
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:744 • [SLOW TEST:172.713 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":311,"completed":25,"skipped":387,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:24:54.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 12:24:55.060: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:24:56.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7852" for this suite. 
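The CustomResourceDefinition spec just above needs only two API calls end to end. A sketch against the apiextensions clientset; the foos.example.com group and kind are illustrative, and the kubeconfig path is the one the log shows the suite loading.

package main

import (
    "context"
    "fmt"

    apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client, err := apiextensionsclient.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    crd := &apiextensionsv1.CustomResourceDefinition{
        ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"}, // must be <plural>.<group>
        Spec: apiextensionsv1.CustomResourceDefinitionSpec{
            Group: "example.com",
            Scope: apiextensionsv1.NamespaceScoped,
            Names: apiextensionsv1.CustomResourceDefinitionNames{
                Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
            },
            Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
                Name: "v1", Served: true, Storage: true,
                Schema: &apiextensionsv1.CustomResourceValidation{
                    OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
                },
            }},
        },
    }

    if _, err := client.ApiextensionsV1().CustomResourceDefinitions().Create(context.TODO(), crd, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
    fmt.Println("created", crd.Name)

    // Deleting the definition also removes every custom object of that type.
    if err := client.ApiextensionsV1().CustomResourceDefinitions().Delete(context.TODO(), crd.Name, metav1.DeleteOptions{}); err != nil {
        panic(err)
    }
}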
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":311,"completed":26,"skipped":394,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:24:56.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating pod pod-subpath-test-configmap-jd4x STEP: Creating a pod to test atomic-volume-subpath Feb 4 12:24:56.770: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jd4x" in namespace "subpath-3201" to be "Succeeded or Failed" Feb 4 12:24:56.891: INFO: Pod "pod-subpath-test-configmap-jd4x": Phase="Pending", Reason="", readiness=false. Elapsed: 120.398262ms Feb 4 12:25:00.460: INFO: Pod "pod-subpath-test-configmap-jd4x": Phase="Pending", Reason="", readiness=false. Elapsed: 3.689358477s Feb 4 12:25:03.040: INFO: Pod "pod-subpath-test-configmap-jd4x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.269224957s Feb 4 12:25:05.289: INFO: Pod "pod-subpath-test-configmap-jd4x": Phase="Pending", Reason="", readiness=false. Elapsed: 8.518305968s Feb 4 12:25:08.802: INFO: Pod "pod-subpath-test-configmap-jd4x": Phase="Pending", Reason="", readiness=false. Elapsed: 12.031080256s Feb 4 12:25:12.143: INFO: Pod "pod-subpath-test-configmap-jd4x": Phase="Pending", Reason="", readiness=false. Elapsed: 15.372587471s Feb 4 12:25:14.205: INFO: Pod "pod-subpath-test-configmap-jd4x": Phase="Pending", Reason="", readiness=false. Elapsed: 17.434350109s Feb 4 12:25:16.441: INFO: Pod "pod-subpath-test-configmap-jd4x": Phase="Pending", Reason="", readiness=false. Elapsed: 19.670474835s Feb 4 12:25:19.252: INFO: Pod "pod-subpath-test-configmap-jd4x": Phase="Pending", Reason="", readiness=false. Elapsed: 22.481770772s Feb 4 12:25:21.370: INFO: Pod "pod-subpath-test-configmap-jd4x": Phase="Pending", Reason="", readiness=false. Elapsed: 24.599763311s Feb 4 12:25:23.379: INFO: Pod "pod-subpath-test-configmap-jd4x": Phase="Running", Reason="", readiness=true. Elapsed: 26.608448843s Feb 4 12:25:25.400: INFO: Pod "pod-subpath-test-configmap-jd4x": Phase="Running", Reason="", readiness=true. Elapsed: 28.629514193s Feb 4 12:25:27.423: INFO: Pod "pod-subpath-test-configmap-jd4x": Phase="Running", Reason="", readiness=true. Elapsed: 30.652852295s Feb 4 12:25:29.436: INFO: Pod "pod-subpath-test-configmap-jd4x": Phase="Running", Reason="", readiness=true. 
Elapsed: 32.665790389s Feb 4 12:25:31.443: INFO: Pod "pod-subpath-test-configmap-jd4x": Phase="Running", Reason="", readiness=true. Elapsed: 34.672869532s Feb 4 12:25:33.461: INFO: Pod "pod-subpath-test-configmap-jd4x": Phase="Running", Reason="", readiness=true. Elapsed: 36.690579116s Feb 4 12:25:35.488: INFO: Pod "pod-subpath-test-configmap-jd4x": Phase="Running", Reason="", readiness=true. Elapsed: 38.717580747s Feb 4 12:25:37.574: INFO: Pod "pod-subpath-test-configmap-jd4x": Phase="Running", Reason="", readiness=true. Elapsed: 40.803636969s Feb 4 12:25:39.581: INFO: Pod "pod-subpath-test-configmap-jd4x": Phase="Running", Reason="", readiness=true. Elapsed: 42.81050269s Feb 4 12:25:43.840: INFO: Pod "pod-subpath-test-configmap-jd4x": Phase="Running", Reason="", readiness=true. Elapsed: 47.069357505s Feb 4 12:25:45.921: INFO: Pod "pod-subpath-test-configmap-jd4x": Phase="Running", Reason="", readiness=true. Elapsed: 49.150760194s Feb 4 12:25:48.363: INFO: Pod "pod-subpath-test-configmap-jd4x": Phase="Running", Reason="", readiness=true. Elapsed: 51.592503684s Feb 4 12:25:50.909: INFO: Pod "pod-subpath-test-configmap-jd4x": Phase="Running", Reason="", readiness=true. Elapsed: 54.138725186s Feb 4 12:25:53.022: INFO: Pod "pod-subpath-test-configmap-jd4x": Phase="Running", Reason="", readiness=true. Elapsed: 56.251649583s Feb 4 12:25:55.527: INFO: Pod "pod-subpath-test-configmap-jd4x": Phase="Running", Reason="", readiness=true. Elapsed: 58.756984s Feb 4 12:25:58.143: INFO: Pod "pod-subpath-test-configmap-jd4x": Phase="Running", Reason="", readiness=true. Elapsed: 1m1.372550386s Feb 4 12:26:00.326: INFO: Pod "pod-subpath-test-configmap-jd4x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m3.55597381s STEP: Saw pod success Feb 4 12:26:00.326: INFO: Pod "pod-subpath-test-configmap-jd4x" satisfied condition "Succeeded or Failed" Feb 4 12:26:01.012: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-jd4x container test-container-subpath-configmap-jd4x: STEP: delete the pod Feb 4 12:26:01.592: INFO: Waiting for pod pod-subpath-test-configmap-jd4x to disappear Feb 4 12:26:01.605: INFO: Pod pod-subpath-test-configmap-jd4x no longer exists STEP: Deleting pod pod-subpath-test-configmap-jd4x Feb 4 12:26:01.605: INFO: Deleting pod "pod-subpath-test-configmap-jd4x" in namespace "subpath-3201" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:26:01.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3201" for this suite. 
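The subpath spec above mounts a single ConfigMap key over a file path that already exists inside the container image. A construct-and-print sketch of such a pod; the ConfigMap name, key, image, and target path are assumptions for illustration (the suite generates its own fixtures).

package main

import (
    "encoding/json"
    "os"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "subpath-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "cm",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "demo-config"},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "reader",
                Image:   "busybox",
                Command: []string{"cat", "/etc/hostname"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "cm",
                    MountPath: "/etc/hostname", // assumed to exist as a regular file in the image
                    SubPath:   "hostname",      // a single key of demo-config shadows that file
                }},
            }},
        },
    }
    if err := json.NewEncoder(os.Stdout).Encode(pod); err != nil {
        panic(err)
    }
}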
• [SLOW TEST:65.501 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":311,"completed":27,"skipped":475,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:26:01.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Feb 4 12:26:02.017: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c535d184-0d73-4dfc-82fe-30f176e9905e" in namespace "projected-16" to be "Succeeded or Failed" Feb 4 12:26:02.045: INFO: Pod "downwardapi-volume-c535d184-0d73-4dfc-82fe-30f176e9905e": Phase="Pending", Reason="", readiness=false. Elapsed: 27.525466ms Feb 4 12:26:04.107: INFO: Pod "downwardapi-volume-c535d184-0d73-4dfc-82fe-30f176e9905e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089733493s Feb 4 12:26:06.133: INFO: Pod "downwardapi-volume-c535d184-0d73-4dfc-82fe-30f176e9905e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115325247s Feb 4 12:26:08.454: INFO: Pod "downwardapi-volume-c535d184-0d73-4dfc-82fe-30f176e9905e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.437151425s STEP: Saw pod success Feb 4 12:26:08.454: INFO: Pod "downwardapi-volume-c535d184-0d73-4dfc-82fe-30f176e9905e" satisfied condition "Succeeded or Failed" Feb 4 12:26:08.526: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-c535d184-0d73-4dfc-82fe-30f176e9905e container client-container: STEP: delete the pod Feb 4 12:26:08.744: INFO: Waiting for pod downwardapi-volume-c535d184-0d73-4dfc-82fe-30f176e9905e to disappear Feb 4 12:26:08.789: INFO: Pod downwardapi-volume-c535d184-0d73-4dfc-82fe-30f176e9905e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:26:08.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-16" for this suite. 
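The downward API spec above relies on a documented fallback: when a container declares no cpu limit, a resourceFieldRef on limits.cpu surfaces the node's allocatable cpu instead. A sketch of a pod wired that way; the pod name, image, and mount path are illustrative.

package main

import (
    "encoding/json"
    "os"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // No cpu limit is set on the container, so the projected file is expected
    // to report node allocatable cpu — the behaviour the spec checks.
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downward-cpu-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "cpu_limit",
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                ContainerName: "client-container",
                                Resource:      "limits.cpu",
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "client-container",
                Image:        "busybox",
                Command:      []string{"cat", "/etc/podinfo/cpu_limit"},
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
        },
    }
    if err := json.NewEncoder(os.Stdout).Encode(pod); err != nil {
        panic(err)
    }
}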
• [SLOW TEST:7.093 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":311,"completed":28,"skipped":487,"failed":0} [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:26:08.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:26:17.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1748" for this suite. • [SLOW TEST:8.328 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":311,"completed":29,"skipped":487,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:26:17.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name configmap-test-volume-a22e4b51-a8c7-4064-8a4e-f6f227f2e176 STEP: Creating a pod to test consume configMaps Feb 4 12:26:17.460: INFO: Waiting up to 5m0s for pod "pod-configmaps-aeb7a8cb-8a26-4451-9ea2-c43c2f31a1f5" in namespace "configmap-7877" to be "Succeeded or Failed" Feb 4 12:26:17.494: INFO: Pod "pod-configmaps-aeb7a8cb-8a26-4451-9ea2-c43c2f31a1f5": Phase="Pending", Reason="", readiness=false. Elapsed: 33.475421ms Feb 4 12:26:20.240: INFO: Pod "pod-configmaps-aeb7a8cb-8a26-4451-9ea2-c43c2f31a1f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.779648585s Feb 4 12:26:22.514: INFO: Pod "pod-configmaps-aeb7a8cb-8a26-4451-9ea2-c43c2f31a1f5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.053659434s Feb 4 12:26:24.715: INFO: Pod "pod-configmaps-aeb7a8cb-8a26-4451-9ea2-c43c2f31a1f5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.254798259s Feb 4 12:26:26.784: INFO: Pod "pod-configmaps-aeb7a8cb-8a26-4451-9ea2-c43c2f31a1f5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.323167012s Feb 4 12:26:29.012: INFO: Pod "pod-configmaps-aeb7a8cb-8a26-4451-9ea2-c43c2f31a1f5": Phase="Running", Reason="", readiness=true. Elapsed: 11.551259093s Feb 4 12:26:31.529: INFO: Pod "pod-configmaps-aeb7a8cb-8a26-4451-9ea2-c43c2f31a1f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.068932653s STEP: Saw pod success Feb 4 12:26:31.529: INFO: Pod "pod-configmaps-aeb7a8cb-8a26-4451-9ea2-c43c2f31a1f5" satisfied condition "Succeeded or Failed" Feb 4 12:26:31.857: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-aeb7a8cb-8a26-4451-9ea2-c43c2f31a1f5 container agnhost-container: STEP: delete the pod Feb 4 12:26:32.045: INFO: Waiting for pod pod-configmaps-aeb7a8cb-8a26-4451-9ea2-c43c2f31a1f5 to disappear Feb 4 12:26:32.067: INFO: Pod pod-configmaps-aeb7a8cb-8a26-4451-9ea2-c43c2f31a1f5 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:26:32.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7877" for this suite. 
• [SLOW TEST:15.035 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":311,"completed":30,"skipped":506,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:26:32.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating secret with name projected-secret-test-8cd32cae-e708-4154-94cd-943aa4a79ea9 STEP: Creating a pod to test consume secrets Feb 4 12:26:32.637: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9e3fa574-9248-441f-a549-cfcf6df81197" in namespace "projected-2100" to be "Succeeded or Failed" Feb 4 12:26:32.694: INFO: Pod "pod-projected-secrets-9e3fa574-9248-441f-a549-cfcf6df81197": Phase="Pending", Reason="", readiness=false. Elapsed: 56.936024ms Feb 4 12:26:36.503: INFO: Pod "pod-projected-secrets-9e3fa574-9248-441f-a549-cfcf6df81197": Phase="Pending", Reason="", readiness=false. Elapsed: 3.865719345s Feb 4 12:26:38.574: INFO: Pod "pod-projected-secrets-9e3fa574-9248-441f-a549-cfcf6df81197": Phase="Pending", Reason="", readiness=false. Elapsed: 5.937433853s Feb 4 12:26:41.209: INFO: Pod "pod-projected-secrets-9e3fa574-9248-441f-a549-cfcf6df81197": Phase="Pending", Reason="", readiness=false. Elapsed: 8.571979264s Feb 4 12:26:44.050: INFO: Pod "pod-projected-secrets-9e3fa574-9248-441f-a549-cfcf6df81197": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.413334516s STEP: Saw pod success Feb 4 12:26:44.050: INFO: Pod "pod-projected-secrets-9e3fa574-9248-441f-a549-cfcf6df81197" satisfied condition "Succeeded or Failed" Feb 4 12:26:44.092: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-9e3fa574-9248-441f-a549-cfcf6df81197 container secret-volume-test: STEP: delete the pod Feb 4 12:26:48.082: INFO: Waiting for pod pod-projected-secrets-9e3fa574-9248-441f-a549-cfcf6df81197 to disappear Feb 4 12:26:48.274: INFO: Pod pod-projected-secrets-9e3fa574-9248-441f-a549-cfcf6df81197 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:26:48.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2100" for this suite. 
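The projected-secret spec above mounts the same Secret through two separate projected volumes in one pod. A sketch; the Secret name, mount paths, and image are illustrative.

package main

import (
    "encoding/json"
    "os"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Two volumes project the same Secret; both are mounted read-only.
    vol := func(name string) corev1.Volume {
        return corev1.Volume{
            Name: name,
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "demo-secret"},
                        },
                    }},
                },
            },
        }
    }
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes:       []corev1.Volume{vol("creds-a"), vol("creds-b")},
            Containers: []corev1.Container{{
                Name:    "reader",
                Image:   "busybox",
                Command: []string{"sh", "-c", "ls /etc/a /etc/b"},
                VolumeMounts: []corev1.VolumeMount{
                    {Name: "creds-a", MountPath: "/etc/a"},
                    {Name: "creds-b", MountPath: "/etc/b"},
                },
            }},
        },
    }
    if err := json.NewEncoder(os.Stdout).Encode(pod); err != nil {
        panic(err)
    }
}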
• [SLOW TEST:16.268 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":311,"completed":31,"skipped":513,"failed":0} SSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:26:48.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 12:26:48.661: INFO: Creating deployment "test-recreate-deployment" Feb 4 12:26:48.678: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Feb 4 12:26:48.868: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Feb 4 12:26:51.698: INFO: Waiting deployment "test-recreate-deployment" to complete Feb 4 12:26:51.737: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748038408, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748038408, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748038409, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748038408, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-66d46fb5ff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 12:26:53.826: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748038408, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748038408, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748038409, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748038408, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-66d46fb5ff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 12:26:56.048: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748038408, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748038408, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748038409, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748038408, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-66d46fb5ff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 12:26:57.778: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Feb 4 12:26:57.827: INFO: Updating deployment test-recreate-deployment Feb 4 12:26:57.827: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Feb 4 12:27:00.576: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-3474 c315e3f3-25a6-4d75-b729-8305c4cac7b3 2041680 2 2021-02-04 12:26:48 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-02-04 12:26:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-02-04 12:27:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[]
map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000bce808 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-02-04 12:27:00 +0000 UTC,LastTransitionTime:2021-02-04 12:27:00 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2021-02-04 12:27:00 +0000 UTC,LastTransitionTime:2021-02-04 12:26:48 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Feb 4 12:27:00.730: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-3474 77e1d3d1-d13f-4368-aa09-af980951eb55 2041675 1 2021-02-04 12:26:58 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment c315e3f3-25a6-4d75-b729-8305c4cac7b3 0xc000bcec60 0xc000bcec61}] [] [{kube-controller-manager Update apps/v1 2021-02-04 12:26:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c315e3f3-25a6-4d75-b729-8305c4cac7b3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000bcecd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 4 12:27:00.730: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Feb 4 12:27:00.731: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-66d46fb5ff deployment-3474 e2284fa3-c68a-47bf-83e5-c4fcb9a9f871 2041664 2 2021-02-04 12:26:48 +0000 UTC map[name:sample-pod-3 pod-template-hash:66d46fb5ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment c315e3f3-25a6-4d75-b729-8305c4cac7b3 0xc000bceb67 0xc000bceb68}] [] [{kube-controller-manager Update apps/v1 2021-02-04 12:26:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c315e3f3-25a6-4d75-b729-8305c4cac7b3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 66d46fb5ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:66d46fb5ff] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.26 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000bcebf8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 4 12:27:00.801: INFO: Pod 
"test-recreate-deployment-f79dd4667-8l9hv" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-8l9hv test-recreate-deployment-f79dd4667- deployment-3474 a998613d-aead-4871-9603-83448889a930 2041683 0 2021-02-04 12:26:58 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 77e1d3d1-d13f-4368-aa09-af980951eb55 0xc000bcf0e0 0xc000bcf0e1}] [] [{kube-controller-manager Update v1 2021-02-04 12:26:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77e1d3d1-d13f-4368-aa09-af980951eb55\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 12:27:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7qtfz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7qtfz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7qtfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:n
il,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 12:26:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 12:26:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 12:26:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 12:26:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2021-02-04 12:26:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:27:00.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3474" for this suite. 
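The behaviour verified above — old pods fully terminated before any new ones start — is selected by a single field, spec.strategy.type: Recreate. A sketch of such a Deployment, reusing the image and label set visible in the dump above; the Deployment name is illustrative.

package main

import (
    "encoding/json"
    "os"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    replicas := int32(1)
    labels := map[string]string{"name": "sample-pod-3"} // mirrors the log above
    dep := appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{Name: "recreate-demo"},
        Spec: appsv1.DeploymentSpec{
            Replicas: &replicas,
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            // Recreate scales the old ReplicaSet to zero before the new one
            // comes up, so old and new pods never run side by side.
            Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "httpd",
                        Image: "docker.io/library/httpd:2.4.38-alpine",
                    }},
                },
            },
        },
    }
    if err := json.NewEncoder(os.Stdout).Encode(dep); err != nil {
        panic(err)
    }
}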
• [SLOW TEST:13.276 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":311,"completed":32,"skipped":519,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:27:01.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: set up a multi version CRD Feb 4 12:27:02.191: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:27:23.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4605" for this suite. 
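In the multi-version CRD spec above, "renaming" a served version amounts to updating the CRD so one entry in spec.versions carries a new name; the apiserver then republishes its OpenAPI document under the new name and stops serving the old one. A small sketch of the versions slice involved; the version names and schema are illustrative.

package main

import (
    "encoding/json"
    "os"

    apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
    schema := &apiextensionsv1.CustomResourceValidation{
        OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
    }
    versions := []apiextensionsv1.CustomResourceDefinitionVersion{
        {Name: "v2", Served: true, Storage: true, Schema: schema},
        {Name: "v3", Served: true, Storage: false, Schema: schema},
    }
    // The "rename" is just an update swapping one entry's Name (here v3 -> v4)
    // while the other version, and the storage flag, stay untouched.
    versions[1].Name = "v4"
    if err := json.NewEncoder(os.Stdout).Encode(versions); err != nil {
        panic(err)
    }
}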
• [SLOW TEST:23.289 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":311,"completed":33,"skipped":526,"failed":0} SSS ------------------------------ [sig-network] Services should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:27:25.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:740 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:27:30.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8412" for this suite. 
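The Endpoint lifecycle spec above is plain CRUD against the core Endpoints resource. A client-go sketch of the same sequence — create, update, patch, delete — with illustrative name, namespace, and addresses.

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    eps := client.CoreV1().Endpoints("default")

    // create
    ep := &corev1.Endpoints{
        ObjectMeta: metav1.ObjectMeta{Name: "endpoint-demo"},
        Subsets: []corev1.EndpointSubset{{
            Addresses: []corev1.EndpointAddress{{IP: "10.0.0.10"}},
            Ports:     []corev1.EndpointPort{{Name: "http", Port: 80, Protocol: corev1.ProtocolTCP}},
        }},
    }
    created, err := eps.Create(context.TODO(), ep, metav1.CreateOptions{})
    if err != nil {
        panic(err)
    }

    // update: replace the backing address wholesale
    created.Subsets[0].Addresses = []corev1.EndpointAddress{{IP: "10.0.0.11"}}
    if _, err := eps.Update(context.TODO(), created, metav1.UpdateOptions{}); err != nil {
        panic(err)
    }

    // patch: touch a single field without sending the whole object
    patch := []byte(`{"metadata":{"labels":{"phase":"patched"}}}`)
    if _, err := eps.Patch(context.TODO(), "endpoint-demo", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
        panic(err)
    }

    // delete
    if err := eps.Delete(context.TODO(), "endpoint-demo", metav1.DeleteOptions{}); err != nil {
        panic(err)
    }
    fmt.Println("endpoint lifecycle complete")
}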
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:744 • [SLOW TEST:6.030 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":311,"completed":34,"skipped":529,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:27:31.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating pod liveness-c7d8d500-5990-4b94-81ec-f91cc35f9e39 in namespace container-probe-7279 Feb 4 12:27:41.568: INFO: Started pod liveness-c7d8d500-5990-4b94-81ec-f91cc35f9e39 in namespace container-probe-7279 STEP: checking the pod's current state and verifying that restartCount is present Feb 4 12:27:41.602: INFO: Initial restart count of pod liveness-c7d8d500-5990-4b94-81ec-f91cc35f9e39 is 0 Feb 4 12:28:21.944: INFO: Restart count of pod container-probe-7279/liveness-c7d8d500-5990-4b94-81ec-f91cc35f9e39 is now 1 (40.342075487s elapsed) Feb 4 12:29:14.030: INFO: Restart count of pod container-probe-7279/liveness-c7d8d500-5990-4b94-81ec-f91cc35f9e39 is now 2 (1m32.428047054s elapsed) Feb 4 12:29:40.737: INFO: Restart count of pod container-probe-7279/liveness-c7d8d500-5990-4b94-81ec-f91cc35f9e39 is now 3 (1m59.134773691s elapsed) Feb 4 12:30:14.904: INFO: Restart count of pod container-probe-7279/liveness-c7d8d500-5990-4b94-81ec-f91cc35f9e39 is now 4 (2m33.301410943s elapsed) Feb 4 12:31:53.667: INFO: Restart count of pod container-probe-7279/liveness-c7d8d500-5990-4b94-81ec-f91cc35f9e39 is now 5 (4m12.06491613s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:31:53.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7279" for this suite. 
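A container that keeps failing its liveness probe is restarted by the kubelet each time, which is what drives the monotonically increasing restart count above. A sketch of a pod built to fail that way; the image and timings are assumptions, and the Probe field name is version-dependent (see the comment).

package main

import (
    "encoding/json"
    "os"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo"},
        Spec: corev1.PodSpec{
            // Always (the default) is what lets the restart count keep climbing.
            RestartPolicy: corev1.RestartPolicyAlways,
            Containers: []corev1.Container{{
                Name:  "liveness",
                Image: "busybox",
                // Healthy for ~15s, then the probe file disappears and every
                // subsequent probe fails, so the kubelet restarts the container.
                Command: []string{"sh", "-c", "touch /tmp/health; sleep 15; rm -f /tmp/health; sleep 600"},
                LivenessProbe: &corev1.Probe{
                    // Field is named Handler in k8s.io/api v0.21 (this suite's
                    // era); it was renamed ProbeHandler in v0.22+.
                    Handler: corev1.Handler{
                        Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
                    },
                    InitialDelaySeconds: 5,
                    PeriodSeconds:       5,
                    FailureThreshold:    1,
                },
            }},
        },
    }
    if err := json.NewEncoder(os.Stdout).Encode(pod); err != nil {
        panic(err)
    }
}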
• [SLOW TEST:262.914 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":311,"completed":35,"skipped":537,"failed":0} [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:31:53.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating secret with name secret-test-9f31094c-8b15-46a8-bedc-b02969209bb9 STEP: Creating a pod to test consume secrets Feb 4 12:31:59.029: INFO: Waiting up to 5m0s for pod "pod-secrets-17521223-10a6-44ca-b03e-496c2461102b" in namespace "secrets-3357" to be "Succeeded or Failed" Feb 4 12:31:59.840: INFO: Pod "pod-secrets-17521223-10a6-44ca-b03e-496c2461102b": Phase="Pending", Reason="", readiness=false. Elapsed: 811.502287ms Feb 4 12:32:02.233: INFO: Pod "pod-secrets-17521223-10a6-44ca-b03e-496c2461102b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.204803735s Feb 4 12:32:05.044: INFO: Pod "pod-secrets-17521223-10a6-44ca-b03e-496c2461102b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015120437s Feb 4 12:32:07.567: INFO: Pod "pod-secrets-17521223-10a6-44ca-b03e-496c2461102b": Phase="Running", Reason="", readiness=true. Elapsed: 8.538557939s Feb 4 12:32:10.524: INFO: Pod "pod-secrets-17521223-10a6-44ca-b03e-496c2461102b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.495062935s STEP: Saw pod success Feb 4 12:32:10.524: INFO: Pod "pod-secrets-17521223-10a6-44ca-b03e-496c2461102b" satisfied condition "Succeeded or Failed" Feb 4 12:32:10.572: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-17521223-10a6-44ca-b03e-496c2461102b container secret-volume-test: STEP: delete the pod Feb 4 12:32:10.861: INFO: Waiting for pod pod-secrets-17521223-10a6-44ca-b03e-496c2461102b to disappear Feb 4 12:32:10.882: INFO: Pod pod-secrets-17521223-10a6-44ca-b03e-496c2461102b no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:32:10.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3357" for this suite. 
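The defaultMode spec above checks the permission bits the kubelet applies to projected Secret keys. A sketch with an assumed Secret name and an 0400 (owner read-only) mode.

package main

import (
    "encoding/json"
    "os"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    mode := int32(0400) // applied to every key file projected from the Secret
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "secret-mode-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "creds",
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{
                        SecretName:  "demo-secret",
                        DefaultMode: &mode,
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "reader",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "ls -l /etc/creds"},
                VolumeMounts: []corev1.VolumeMount{{Name: "creds", MountPath: "/etc/creds"}},
            }},
        },
    }
    if err := json.NewEncoder(os.Stdout).Encode(pod); err != nil {
        panic(err)
    }
}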
• [SLOW TEST:17.029 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":36,"skipped":537,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:32:11.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 4 12:32:11.985: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 4 12:32:16.256: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748038732, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748038732, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748038732, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748038731, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 12:32:18.379: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748038732, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748038732, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748038732, loc:(*time.Location)(0x7886c60)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748038731, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 4 12:32:21.378: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 12:32:21.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:32:22.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7817" for this suite. STEP: Destroying namespace "webhook-7817-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:12.264 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":311,"completed":37,"skipped":544,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:32:23.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 12:32:23.463: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-582 version' Feb 4 12:32:24.808: INFO: stderr: "" Feb 4 
12:32:24.808: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"21+\", GitVersion:\"v1.21.0-alpha.2\", GitCommit:\"bae644e85e085ce668c3edf23e8789fb623331b4\", GitTreeState:\"clean\", BuildDate:\"2021-01-26T18:37:26Z\", GoVersion:\"go1.15.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"21+\", GitVersion:\"v1.21.0-alpha.0\", GitCommit:\"98bc258bf5516b6c60860e06845b899eab29825d\", GitTreeState:\"clean\", BuildDate:\"2021-01-09T21:29:39Z\", GoVersion:\"go1.15.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:32:24.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-582" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":311,"completed":38,"skipped":557,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:32:25.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Feb 4 12:32:26.705: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9440 48f904c7-846b-4ce6-be5b-7ac7fec63962 2046614 0 2021-02-04 12:32:25 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-02-04 12:32:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Feb 4 12:32:26.705: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9440 48f904c7-846b-4ce6-be5b-7ac7fec63962 2046617 0 2021-02-04 12:32:25 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-02-04 12:32:25 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 4 12:32:26.705: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9440 48f904c7-846b-4ce6-be5b-7ac7fec63962 2046619 0 2021-02-04 12:32:25 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-02-04 12:32:25 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 
1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Feb 4 12:32:37.858: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9440 48f904c7-846b-4ce6-be5b-7ac7fec63962 2046855 0 2021-02-04 12:32:25 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-02-04 12:32:25 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 4 12:32:37.858: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9440 48f904c7-846b-4ce6-be5b-7ac7fec63962 2046859 0 2021-02-04 12:32:25 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-02-04 12:32:25 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 4 12:32:37.858: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9440 48f904c7-846b-4ce6-be5b-7ac7fec63962 2046862 0 2021-02-04 12:32:25 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2021-02-04 12:32:25 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:32:37.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9440" for this suite. 
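The key behavior the watch test above exercises: a watch opened with a label selector translates "object stopped matching the selector" into a DELETED event and "object matches again" into a fresh ADDED event, even though the configmap itself is only being relabeled, not removed. A configmap like the one in the log (name and label from the log; data illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed
  labels:
    watch-this-configmap: label-changed-and-restored
data:
  mutation: "1"

Running kubectl get configmap -w -l watch-this-configmap=label-changed-and-restored against such an object and flipping the label away and back reproduces the ADDED/MODIFIED/DELETED sequence recorded above.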
• [SLOW TEST:12.775 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":311,"completed":39,"skipped":588,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:32:38.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-8256 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a new StatefulSet Feb 4 12:32:39.914: INFO: Found 0 stateful pods, waiting for 3 Feb 4 12:32:50.051: INFO: Found 2 stateful pods, waiting for 3 Feb 4 12:32:59.959: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 4 12:32:59.959: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 4 12:32:59.959: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Feb 4 12:33:09.964: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 4 12:33:09.964: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 4 12:33:09.964: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Feb 4 12:33:12.876: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Feb 4 12:33:23.616: INFO: Updating stateful set ss2 Feb 4 12:33:23.684: INFO: Waiting for Pod statefulset-8256/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 4 12:33:33.765: INFO: Waiting for Pod statefulset-8256/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 4 12:33:46.345: INFO: Waiting for Pod statefulset-8256/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring 
Pods to the correct revision when they are deleted Feb 4 12:33:55.924: INFO: Found 2 stateful pods, waiting for 3 Feb 4 12:34:06.856: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 4 12:34:06.856: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 4 12:34:06.856: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 4 12:34:15.978: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 4 12:34:15.978: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 4 12:34:15.978: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Feb 4 12:34:16.156: INFO: Updating stateful set ss2 Feb 4 12:34:16.260: INFO: Waiting for Pod statefulset-8256/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 4 12:34:27.793: INFO: Waiting for Pod statefulset-8256/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 4 12:34:37.189: INFO: Waiting for Pod statefulset-8256/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 4 12:34:46.269: INFO: Waiting for Pod statefulset-8256/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 4 12:34:56.264: INFO: Waiting for Pod statefulset-8256/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 4 12:35:06.446: INFO: Updating stateful set ss2 Feb 4 12:35:06.806: INFO: Waiting for StatefulSet statefulset-8256/ss2 to complete update Feb 4 12:35:06.806: INFO: Waiting for Pod statefulset-8256/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 4 12:35:16.810: INFO: Waiting for StatefulSet statefulset-8256/ss2 to complete update Feb 4 12:35:16.810: INFO: Waiting for Pod statefulset-8256/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 4 12:35:26.813: INFO: Waiting for StatefulSet statefulset-8256/ss2 to complete update Feb 4 12:35:26.813: INFO: Waiting for Pod statefulset-8256/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 4 12:35:36.811: INFO: Waiting for StatefulSet statefulset-8256/ss2 to complete update Feb 4 12:35:36.811: INFO: Waiting for Pod statefulset-8256/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 4 12:35:46.810: INFO: Waiting for StatefulSet statefulset-8256/ss2 to complete update Feb 4 12:35:46.810: INFO: Waiting for Pod statefulset-8256/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 4 12:35:57.470: INFO: Waiting for StatefulSet statefulset-8256/ss2 to complete update Feb 4 12:35:57.470: INFO: Waiting for Pod statefulset-8256/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 4 12:36:08.394: INFO: Waiting for StatefulSet statefulset-8256/ss2 to complete update Feb 4 12:36:17.845: INFO: Waiting for StatefulSet statefulset-8256/ss2 to complete update Feb 4 12:36:31.707: INFO: Waiting for StatefulSet statefulset-8256/ss2 to complete update Feb 4 12:36:38.393: INFO: Waiting for StatefulSet statefulset-8256/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Feb 4 12:36:46.966: INFO: Deleting all statefulset in ns statefulset-8256 Feb 4 12:36:47.039: INFO: Scaling statefulset ss2 to 0 Feb 4 
12:39:08.040: INFO: Waiting for statefulset status.replicas updated to 0 Feb 4 12:39:08.134: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:39:08.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8256" for this suite. • [SLOW TEST:390.416 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":311,"completed":40,"skipped":607,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:39:08.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating pod busybox-1942c4d4-40d6-4999-bba6-56e8e1d33881 in namespace container-probe-393 Feb 4 12:39:15.214: INFO: Started pod busybox-1942c4d4-40d6-4999-bba6-56e8e1d33881 in namespace container-probe-393 STEP: checking the pod's current state and verifying that restartCount is present Feb 4 12:39:15.604: INFO: Initial restart count of pod busybox-1942c4d4-40d6-4999-bba6-56e8e1d33881 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:43:16.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-393" for this suite. 
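The companion probe test above is the inverse of the earlier restart-count one: the exec probe ("cat /tmp/health") succeeds for the pod's whole life, so restartCount must stay 0 for the full observation window (about four minutes in the log). A minimal sketch with hypothetical names:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-ok-demo             # hypothetical name
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]  # succeeds as long as the file exists
      initialDelaySeconds: 5
      periodSeconds: 5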
• [SLOW TEST:248.371 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":311,"completed":41,"skipped":656,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:43:16.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Feb 4 12:43:22.421: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:43:22.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6229" for this suite. 
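The adoption test above creates a bare pod first, then a ReplicaSet whose selector matches it; the controller adopts the orphan by writing an ownerReference instead of creating a new pod, and releases it (and spawns a replacement) once the pod's label is changed away. A sketch using the label from the log (image is an assumption):

apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption-release
  labels:
    name: pod-adoption-release
spec:
  containers:
  - name: httpd
    image: docker.io/library/httpd:2.4.38-alpine   # assumed image
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine

After the ReplicaSet is created, kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences}' shows the adoption; editing the pod's name label makes the controller drop that reference again.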
• [SLOW TEST:5.930 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":311,"completed":42,"skipped":692,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:43:22.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:43:23.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5335" for this suite. 
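The Events API test drives the newer events.k8s.io/v1 group end to end: create, get, patch, update, delete, plus list calls filtered with field selectors on source and reportingController. A hand-written event of roughly that shape (all names hypothetical; eventTime and the reporting fields are required by this API group):

apiVersion: events.k8s.io/v1
kind: Event
metadata:
  name: demo-event                 # hypothetical name
  namespace: default
eventTime: "2021-02-04T12:43:22.000000Z"
type: Normal
reason: DemoReason
action: Demo
note: illustrative event body
regarding:
  kind: Pod
  namespace: default
  name: some-pod                   # hypothetical target object
reportingController: example.com/demo-controller
reportingInstance: demo-instance

Listing with a reportingController field selector is the client-side analogue of the filtered list steps above, where the server supports that selector.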
•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":311,"completed":43,"skipped":714,"failed":0} SSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:43:23.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test substitution in container's args Feb 4 12:43:23.749: INFO: Waiting up to 5m0s for pod "var-expansion-f2ba62a1-7e79-482b-acc3-812b485977e9" in namespace "var-expansion-1865" to be "Succeeded or Failed" Feb 4 12:43:23.800: INFO: Pod "var-expansion-f2ba62a1-7e79-482b-acc3-812b485977e9": Phase="Pending", Reason="", readiness=false. Elapsed: 51.264156ms Feb 4 12:43:26.430: INFO: Pod "var-expansion-f2ba62a1-7e79-482b-acc3-812b485977e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.681488093s Feb 4 12:43:28.498: INFO: Pod "var-expansion-f2ba62a1-7e79-482b-acc3-812b485977e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.749299722s Feb 4 12:43:30.608: INFO: Pod "var-expansion-f2ba62a1-7e79-482b-acc3-812b485977e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.859483917s STEP: Saw pod success Feb 4 12:43:30.609: INFO: Pod "var-expansion-f2ba62a1-7e79-482b-acc3-812b485977e9" satisfied condition "Succeeded or Failed" Feb 4 12:43:30.719: INFO: Trying to get logs from node latest-worker2 pod var-expansion-f2ba62a1-7e79-482b-acc3-812b485977e9 container dapi-container: STEP: delete the pod Feb 4 12:43:30.870: INFO: Waiting for pod var-expansion-f2ba62a1-7e79-482b-acc3-812b485977e9 to disappear Feb 4 12:43:30.964: INFO: Pod var-expansion-f2ba62a1-7e79-482b-acc3-812b485977e9 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:43:30.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1865" for this suite. 
• [SLOW TEST:7.621 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":311,"completed":44,"skipped":718,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:43:30.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Feb 4 12:43:31.278: INFO: >>> kubeConfig: /root/.kube/config Feb 4 12:43:34.909: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:43:49.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2331" for this suite. 
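What this OpenAPI test verifies is that two CRDs sharing a group and version but declaring different kinds each get their own, non-colliding schema in the aggregated /openapi/v2 document. Two such CRDs might look like this (group and kind names hypothetical):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-foos.crd-demo.example.com
spec:
  group: crd-demo.example.com
  scope: Namespaced
  names: {plural: e2e-test-foos, singular: e2e-test-foo, kind: E2eTestFoo, listKind: E2eTestFooList}
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec: {type: object}
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-bars.crd-demo.example.com   # same group/version, different kind
spec:
  group: crd-demo.example.com
  scope: Namespaced
  names: {plural: e2e-test-bars, singular: e2e-test-bar, kind: E2eTestBar, listKind: E2eTestBarList}
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec: {type: object}

kubectl explain for either plural name then resolves against the published schema, which is the documentation surface the test checks.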
• [SLOW TEST:18.897 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":311,"completed":45,"skipped":736,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:43:49.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test emptydir 0666 on tmpfs Feb 4 12:43:50.096: INFO: Waiting up to 5m0s for pod "pod-f2ad9efb-94fd-4487-8ff9-3506cfcbc635" in namespace "emptydir-3306" to be "Succeeded or Failed" Feb 4 12:43:50.165: INFO: Pod "pod-f2ad9efb-94fd-4487-8ff9-3506cfcbc635": Phase="Pending", Reason="", readiness=false. Elapsed: 68.914923ms Feb 4 12:43:52.437: INFO: Pod "pod-f2ad9efb-94fd-4487-8ff9-3506cfcbc635": Phase="Pending", Reason="", readiness=false. Elapsed: 2.340932687s Feb 4 12:43:54.660: INFO: Pod "pod-f2ad9efb-94fd-4487-8ff9-3506cfcbc635": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.563171624s STEP: Saw pod success Feb 4 12:43:54.660: INFO: Pod "pod-f2ad9efb-94fd-4487-8ff9-3506cfcbc635" satisfied condition "Succeeded or Failed" Feb 4 12:43:54.706: INFO: Trying to get logs from node latest-worker2 pod pod-f2ad9efb-94fd-4487-8ff9-3506cfcbc635 container test-container: STEP: delete the pod Feb 4 12:43:54.850: INFO: Waiting for pod pod-f2ad9efb-94fd-4487-8ff9-3506cfcbc635 to disappear Feb 4 12:43:54.861: INFO: Pod pod-f2ad9efb-94fd-4487-8ff9-3506cfcbc635 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:43:54.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3306" for this suite. 
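The (root,0666,tmpfs) tuple in the test name encodes the scenario: run as root, expect a file with mode 0666, and back the emptyDir with memory (tmpfs) rather than node disk. A minimal equivalent pod (hypothetical names):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "umask 0; echo content > /test-volume/f; ls -l /test-volume/f; mount | grep test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                 # tmpfs-backed instead of node disk

The mount line the container prints shows the tmpfs filesystem type, and the ls line shows the 0666 (-rw-rw-rw-) mode being asserted.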
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":46,"skipped":747,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:43:54.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: create deployment with httpd image Feb 4 12:43:55.042: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-9865 create -f -' Feb 4 12:43:59.264: INFO: stderr: "" Feb 4 12:43:59.265: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Feb 4 12:43:59.265: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-9865 diff -f -' Feb 4 12:43:59.898: INFO: rc: 1 Feb 4 12:43:59.899: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-9865 delete -f -' Feb 4 12:44:00.023: INFO: stderr: "" Feb 4 12:44:00.023: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:44:00.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9865" for this suite. 
• [SLOW TEST:5.245 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl diff /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:878 should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":311,"completed":47,"skipped":755,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:44:00.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating secret with name secret-test-map-f88cf103-9f5d-4f5b-9b08-a84f6a1a6ff3 STEP: Creating a pod to test consume secrets Feb 4 12:44:00.463: INFO: Waiting up to 5m0s for pod "pod-secrets-cc85f77a-5cca-44e9-a6e4-11371c57f32f" in namespace "secrets-8713" to be "Succeeded or Failed" Feb 4 12:44:00.498: INFO: Pod "pod-secrets-cc85f77a-5cca-44e9-a6e4-11371c57f32f": Phase="Pending", Reason="", readiness=false. Elapsed: 34.398145ms Feb 4 12:44:02.732: INFO: Pod "pod-secrets-cc85f77a-5cca-44e9-a6e4-11371c57f32f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.268954317s Feb 4 12:44:04.920: INFO: Pod "pod-secrets-cc85f77a-5cca-44e9-a6e4-11371c57f32f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.456569165s Feb 4 12:44:06.958: INFO: Pod "pod-secrets-cc85f77a-5cca-44e9-a6e4-11371c57f32f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.494079175s STEP: Saw pod success Feb 4 12:44:06.958: INFO: Pod "pod-secrets-cc85f77a-5cca-44e9-a6e4-11371c57f32f" satisfied condition "Succeeded or Failed" Feb 4 12:44:07.072: INFO: Trying to get logs from node latest-worker pod pod-secrets-cc85f77a-5cca-44e9-a6e4-11371c57f32f container secret-volume-test: STEP: delete the pod Feb 4 12:44:07.337: INFO: Waiting for pod pod-secrets-cc85f77a-5cca-44e9-a6e4-11371c57f32f to disappear Feb 4 12:44:07.360: INFO: Pod pod-secrets-cc85f77a-5cca-44e9-a6e4-11371c57f32f no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:44:07.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8713" for this suite. 
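"With mappings" means the volume uses items to project a chosen secret key to a chosen relative path instead of the default one-file-per-key layout. A sketch under hypothetical names, reusing the container name from the log:

apiVersion: v1
kind: Secret
metadata:
  name: secret-mapping-demo          # hypothetical name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-mapping-demo      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-mapping-demo
      items:
      - key: data-1                  # source key in the secret
        path: new-path-data-1        # file name it appears under in the volume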
• [SLOW TEST:7.330 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":311,"completed":48,"skipped":773,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:44:07.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test override arguments Feb 4 12:44:07.705: INFO: Waiting up to 5m0s for pod "client-containers-83e96073-368c-4846-a08b-4a3c3c68d2f4" in namespace "containers-9588" to be "Succeeded or Failed" Feb 4 12:44:07.780: INFO: Pod "client-containers-83e96073-368c-4846-a08b-4a3c3c68d2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 74.636459ms Feb 4 12:44:09.813: INFO: Pod "client-containers-83e96073-368c-4846-a08b-4a3c3c68d2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107687299s Feb 4 12:44:11.906: INFO: Pod "client-containers-83e96073-368c-4846-a08b-4a3c3c68d2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.200522542s Feb 4 12:44:13.985: INFO: Pod "client-containers-83e96073-368c-4846-a08b-4a3c3c68d2f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.280460233s STEP: Saw pod success Feb 4 12:44:13.986: INFO: Pod "client-containers-83e96073-368c-4846-a08b-4a3c3c68d2f4" satisfied condition "Succeeded or Failed" Feb 4 12:44:14.147: INFO: Trying to get logs from node latest-worker pod client-containers-83e96073-368c-4846-a08b-4a3c3c68d2f4 container agnhost-container: STEP: delete the pod Feb 4 12:44:14.354: INFO: Waiting for pod client-containers-83e96073-368c-4846-a08b-4a3c3c68d2f4 to disappear Feb 4 12:44:14.388: INFO: Pod client-containers-83e96073-368c-4846-a08b-4a3c3c68d2f4 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:44:14.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9588" for this suite. 
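"Override the image's default arguments (docker cmd)" maps to setting only args in the container spec: Kubernetes args replace the image's CMD while any image ENTRYPOINT is kept. A minimal sketch (hypothetical names; busybox stands in for the suite's agnhost image):

apiVersion: v1
kind: Pod
metadata:
  name: args-override-demo           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: agnhost-container
    image: docker.io/library/busybox:1.29
    args: ["echo", "overridden", "arguments"]  # replaces the image CMD; the ENTRYPOINT, if any, is unchanged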
• [SLOW TEST:7.015 seconds] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":311,"completed":49,"skipped":795,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:44:14.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0204 12:44:30.874185 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Feb 4 12:45:33.697: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Feb 4 12:45:33.697: INFO: Deleting pod "simpletest-rc-to-be-deleted-28rkm" in namespace "gc-2318" Feb 4 12:45:33.882: INFO: Deleting pod "simpletest-rc-to-be-deleted-h49f5" in namespace "gc-2318" Feb 4 12:45:34.717: INFO: Deleting pod "simpletest-rc-to-be-deleted-jhcx9" in namespace "gc-2318" Feb 4 12:45:35.009: INFO: Deleting pod "simpletest-rc-to-be-deleted-lgbqj" in namespace "gc-2318" Feb 4 12:45:35.697: INFO: Deleting pod "simpletest-rc-to-be-deleted-q49wz" in namespace "gc-2318" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:45:35.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2318" for this suite. 
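The invariant behind this garbage-collector test: a dependent is collected only when all of its owners are gone, so deleting simpletest-rc-to-be-deleted must leave alive every pod that also lists simpletest-rc-to-stay as an owner (which is why the test deletes the survivors itself at the end). The doubly-owned pods carry metadata shaped like this (UIDs are placeholders; in reality they come from the live controller objects):

apiVersion: v1
kind: Pod
metadata:
  name: simpletest-rc-to-be-deleted-28rkm        # name from the log
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: 11111111-1111-1111-1111-111111111111    # placeholder UID
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay
    uid: 22222222-2222-2222-2222-222222222222    # placeholder UID
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine   # assumed image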
• [SLOW TEST:81.655 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":311,"completed":50,"skipped":796,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:45:36.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test override all Feb 4 12:45:37.222: INFO: Waiting up to 5m0s for pod "client-containers-52aebcb8-7aaf-4a06-ae58-1f3e131c006c" in namespace "containers-5282" to be "Succeeded or Failed" Feb 4 12:45:37.391: INFO: Pod "client-containers-52aebcb8-7aaf-4a06-ae58-1f3e131c006c": Phase="Pending", Reason="", readiness=false. Elapsed: 168.918929ms Feb 4 12:45:39.442: INFO: Pod "client-containers-52aebcb8-7aaf-4a06-ae58-1f3e131c006c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220044998s Feb 4 12:45:41.585: INFO: Pod "client-containers-52aebcb8-7aaf-4a06-ae58-1f3e131c006c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.363361375s Feb 4 12:45:43.623: INFO: Pod "client-containers-52aebcb8-7aaf-4a06-ae58-1f3e131c006c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.401626136s STEP: Saw pod success Feb 4 12:45:43.624: INFO: Pod "client-containers-52aebcb8-7aaf-4a06-ae58-1f3e131c006c" satisfied condition "Succeeded or Failed" Feb 4 12:45:43.704: INFO: Trying to get logs from node latest-worker2 pod client-containers-52aebcb8-7aaf-4a06-ae58-1f3e131c006c container agnhost-container: STEP: delete the pod Feb 4 12:45:44.001: INFO: Waiting for pod client-containers-52aebcb8-7aaf-4a06-ae58-1f3e131c006c to disappear Feb 4 12:45:44.007: INFO: Pod client-containers-52aebcb8-7aaf-4a06-ae58-1f3e131c006c no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:45:44.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5282" for this suite. 
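The "override all" step logged above is the stronger variant of the args-only test: setting both command and args replaces the image's ENTRYPOINT and CMD respectively, so the container runs exactly what the pod spec says. A minimal sketch (hypothetical names; busybox again stands in for agnhost):

apiVersion: v1
kind: Pod
metadata:
  name: command-override-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: agnhost-container
    image: docker.io/library/busybox:1.29
    command: ["echo"]                # replaces the image ENTRYPOINT
    args: ["override", "all"]        # replaces the image CMD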
• [SLOW TEST:7.953 seconds] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":311,"completed":51,"skipped":826,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:45:44.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating projection with configMap that has name projected-configmap-test-upd-d48530c6-699f-496f-90da-a284e51ad2ed STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-d48530c6-699f-496f-90da-a284e51ad2ed STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:45:51.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4679" for this suite. • [SLOW TEST:7.080 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":311,"completed":52,"skipped":844,"failed":0} SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:45:51.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 4 12:46:03.873: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 4 12:46:04.008: INFO: Pod pod-with-prestop-exec-hook still exists Feb 4 12:46:06.008: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 4 12:46:06.064: INFO: Pod pod-with-prestop-exec-hook still exists Feb 4 12:46:08.008: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 4 12:46:08.133: INFO: Pod pod-with-prestop-exec-hook still exists Feb 4 12:46:10.008: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 4 12:46:10.039: INFO: Pod pod-with-prestop-exec-hook still exists Feb 4 12:46:12.008: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 4 12:46:12.063: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:46:12.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9830" for this suite. • [SLOW TEST:21.289 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":311,"completed":53,"skipped":849,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:46:12.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:740 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-4429 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4429 STEP: creating replication controller externalsvc in namespace services-4429 I0204 12:46:12.878710 7 runners.go:190] Created replication controller with name: 
externalsvc, namespace: services-4429, replica count: 2 I0204 12:46:15.929282 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 12:46:18.929513 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Feb 4 12:46:19.935: INFO: Creating new exec pod Feb 4 12:46:26.031: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-4429 exec execpodhph7t -- /bin/sh -x -c nslookup clusterip-service.services-4429.svc.cluster.local' Feb 4 12:46:26.282: INFO: stderr: "I0204 12:46:26.183656 382 log.go:181] (0xc000144370) (0xc0007c66e0) Create stream\nI0204 12:46:26.183711 382 log.go:181] (0xc000144370) (0xc0007c66e0) Stream added, broadcasting: 1\nI0204 12:46:26.186018 382 log.go:181] (0xc000144370) Reply frame received for 1\nI0204 12:46:26.186074 382 log.go:181] (0xc000144370) (0xc0007c6a00) Create stream\nI0204 12:46:26.186091 382 log.go:181] (0xc000144370) (0xc0007c6a00) Stream added, broadcasting: 3\nI0204 12:46:26.186943 382 log.go:181] (0xc000144370) Reply frame received for 3\nI0204 12:46:26.186971 382 log.go:181] (0xc000144370) (0xc0007c7180) Create stream\nI0204 12:46:26.186979 382 log.go:181] (0xc000144370) (0xc0007c7180) Stream added, broadcasting: 5\nI0204 12:46:26.187729 382 log.go:181] (0xc000144370) Reply frame received for 5\nI0204 12:46:26.261496 382 log.go:181] (0xc000144370) Data frame received for 5\nI0204 12:46:26.261522 382 log.go:181] (0xc0007c7180) (5) Data frame handling\nI0204 12:46:26.261538 382 log.go:181] (0xc0007c7180) (5) Data frame sent\n+ nslookup clusterip-service.services-4429.svc.cluster.local\nI0204 12:46:26.270675 382 log.go:181] (0xc000144370) Data frame received for 3\nI0204 12:46:26.270834 382 log.go:181] (0xc0007c6a00) (3) Data frame handling\nI0204 12:46:26.270941 382 log.go:181] (0xc0007c6a00) (3) Data frame sent\nI0204 12:46:26.272737 382 log.go:181] (0xc000144370) Data frame received for 3\nI0204 12:46:26.272761 382 log.go:181] (0xc0007c6a00) (3) Data frame handling\nI0204 12:46:26.272789 382 log.go:181] (0xc0007c6a00) (3) Data frame sent\nI0204 12:46:26.273767 382 log.go:181] (0xc000144370) Data frame received for 5\nI0204 12:46:26.273782 382 log.go:181] (0xc0007c7180) (5) Data frame handling\nI0204 12:46:26.273833 382 log.go:181] (0xc000144370) Data frame received for 3\nI0204 12:46:26.273872 382 log.go:181] (0xc0007c6a00) (3) Data frame handling\nI0204 12:46:26.275996 382 log.go:181] (0xc000144370) Data frame received for 1\nI0204 12:46:26.276020 382 log.go:181] (0xc0007c66e0) (1) Data frame handling\nI0204 12:46:26.276035 382 log.go:181] (0xc0007c66e0) (1) Data frame sent\nI0204 12:46:26.276053 382 log.go:181] (0xc000144370) (0xc0007c66e0) Stream removed, broadcasting: 1\nI0204 12:46:26.276095 382 log.go:181] (0xc000144370) Go away received\nI0204 12:46:26.276508 382 log.go:181] (0xc000144370) (0xc0007c66e0) Stream removed, broadcasting: 1\nI0204 12:46:26.276532 382 log.go:181] (0xc000144370) (0xc0007c6a00) Stream removed, broadcasting: 3\nI0204 12:46:26.276545 382 log.go:181] (0xc000144370) (0xc0007c7180) Stream removed, broadcasting: 5\n" Feb 4 12:46:26.282: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-4429.svc.cluster.local\tcanonical name = 
externalsvc.services-4429.svc.cluster.local.\nName:\texternalsvc.services-4429.svc.cluster.local\nAddress: 10.96.148.96\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4429, will wait for the garbage collector to delete the pods Feb 4 12:46:26.405: INFO: Deleting ReplicationController externalsvc took: 61.595554ms Feb 4 12:46:27.106: INFO: Terminating ReplicationController externalsvc pods took: 700.263107ms Feb 4 12:47:11.426: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:47:11.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4429" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:744 • [SLOW TEST:59.152 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":311,"completed":54,"skipped":863,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:47:11.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:740 [It] should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating service nodeport-test with type=NodePort in namespace services-8494 STEP: creating replication controller nodeport-test in namespace services-8494 I0204 12:47:12.079718 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-8494, replica count: 2 I0204 12:47:15.130152 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 12:47:18.130437 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 12:47:21.130747 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 4 12:47:21.130: INFO: Creating new exec pod Feb 4 12:47:28.287: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-8494 exec execpodmvd9v -- /bin/sh -x -c nc -zv -t -w 2 
nodeport-test 80' Feb 4 12:47:28.684: INFO: stderr: "I0204 12:47:28.616100 399 log.go:181] (0xc000abe000) (0xc000ab6000) Create stream\nI0204 12:47:28.616167 399 log.go:181] (0xc000abe000) (0xc000ab6000) Stream added, broadcasting: 1\nI0204 12:47:28.617888 399 log.go:181] (0xc000abe000) Reply frame received for 1\nI0204 12:47:28.617941 399 log.go:181] (0xc000abe000) (0xc0007e2000) Create stream\nI0204 12:47:28.617959 399 log.go:181] (0xc000abe000) (0xc0007e2000) Stream added, broadcasting: 3\nI0204 12:47:28.618757 399 log.go:181] (0xc000abe000) Reply frame received for 3\nI0204 12:47:28.618783 399 log.go:181] (0xc000abe000) (0xc00054a000) Create stream\nI0204 12:47:28.618803 399 log.go:181] (0xc000abe000) (0xc00054a000) Stream added, broadcasting: 5\nI0204 12:47:28.619507 399 log.go:181] (0xc000abe000) Reply frame received for 5\nI0204 12:47:28.677623 399 log.go:181] (0xc000abe000) Data frame received for 3\nI0204 12:47:28.677674 399 log.go:181] (0xc0007e2000) (3) Data frame handling\nI0204 12:47:28.677753 399 log.go:181] (0xc000abe000) Data frame received for 5\nI0204 12:47:28.677778 399 log.go:181] (0xc00054a000) (5) Data frame handling\nI0204 12:47:28.677803 399 log.go:181] (0xc00054a000) (5) Data frame sent\nI0204 12:47:28.677819 399 log.go:181] (0xc000abe000) Data frame received for 5\nI0204 12:47:28.677833 399 log.go:181] (0xc00054a000) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0204 12:47:28.679150 399 log.go:181] (0xc000abe000) Data frame received for 1\nI0204 12:47:28.679176 399 log.go:181] (0xc000ab6000) (1) Data frame handling\nI0204 12:47:28.679192 399 log.go:181] (0xc000ab6000) (1) Data frame sent\nI0204 12:47:28.679208 399 log.go:181] (0xc000abe000) (0xc000ab6000) Stream removed, broadcasting: 1\nI0204 12:47:28.679288 399 log.go:181] (0xc000abe000) Go away received\nI0204 12:47:28.679608 399 log.go:181] (0xc000abe000) (0xc000ab6000) Stream removed, broadcasting: 1\nI0204 12:47:28.679624 399 log.go:181] (0xc000abe000) (0xc0007e2000) Stream removed, broadcasting: 3\nI0204 12:47:28.679633 399 log.go:181] (0xc000abe000) (0xc00054a000) Stream removed, broadcasting: 5\n" Feb 4 12:47:28.685: INFO: stdout: "" Feb 4 12:47:28.686: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-8494 exec execpodmvd9v -- /bin/sh -x -c nc -zv -t -w 2 10.96.125.195 80' Feb 4 12:47:29.412: INFO: stderr: "I0204 12:47:29.350853 417 log.go:181] (0xc00011b8c0) (0xc000c0a780) Create stream\nI0204 12:47:29.350923 417 log.go:181] (0xc00011b8c0) (0xc000c0a780) Stream added, broadcasting: 1\nI0204 12:47:29.352515 417 log.go:181] (0xc00011b8c0) Reply frame received for 1\nI0204 12:47:29.352565 417 log.go:181] (0xc00011b8c0) (0xc000a30000) Create stream\nI0204 12:47:29.352577 417 log.go:181] (0xc00011b8c0) (0xc000a30000) Stream added, broadcasting: 3\nI0204 12:47:29.353737 417 log.go:181] (0xc00011b8c0) Reply frame received for 3\nI0204 12:47:29.354000 417 log.go:181] (0xc00011b8c0) (0xc000530000) Create stream\nI0204 12:47:29.354025 417 log.go:181] (0xc00011b8c0) (0xc000530000) Stream added, broadcasting: 5\nI0204 12:47:29.355790 417 log.go:181] (0xc00011b8c0) Reply frame received for 5\nI0204 12:47:29.405530 417 log.go:181] (0xc00011b8c0) Data frame received for 5\nI0204 12:47:29.405577 417 log.go:181] (0xc000530000) (5) Data frame handling\nI0204 12:47:29.405591 417 log.go:181] (0xc000530000) (5) Data frame sent\nI0204 12:47:29.405599 417 log.go:181] 
(0xc00011b8c0) Data frame received for 5\n+ nc -zv -t -w 2 10.96.125.195 80\nConnection to 10.96.125.195 80 port [tcp/http] succeeded!\nI0204 12:47:29.405608 417 log.go:181] (0xc000530000) (5) Data frame handling\nI0204 12:47:29.405708 417 log.go:181] (0xc00011b8c0) Data frame received for 3\nI0204 12:47:29.405742 417 log.go:181] (0xc000a30000) (3) Data frame handling\nI0204 12:47:29.406830 417 log.go:181] (0xc00011b8c0) Data frame received for 1\nI0204 12:47:29.406854 417 log.go:181] (0xc000c0a780) (1) Data frame handling\nI0204 12:47:29.406870 417 log.go:181] (0xc000c0a780) (1) Data frame sent\nI0204 12:47:29.406886 417 log.go:181] (0xc00011b8c0) (0xc000c0a780) Stream removed, broadcasting: 1\nI0204 12:47:29.406920 417 log.go:181] (0xc00011b8c0) Go away received\nI0204 12:47:29.407309 417 log.go:181] (0xc00011b8c0) (0xc000c0a780) Stream removed, broadcasting: 1\nI0204 12:47:29.407324 417 log.go:181] (0xc00011b8c0) (0xc000a30000) Stream removed, broadcasting: 3\nI0204 12:47:29.407333 417 log.go:181] (0xc00011b8c0) (0xc000530000) Stream removed, broadcasting: 5\n" Feb 4 12:47:29.412: INFO: stdout: "" Feb 4 12:47:29.412: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-8494 exec execpodmvd9v -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 30973' Feb 4 12:47:29.916: INFO: stderr: "I0204 12:47:29.846302 434 log.go:181] (0xc00003b1e0) (0xc0008f63c0) Create stream\nI0204 12:47:29.846378 434 log.go:181] (0xc00003b1e0) (0xc0008f63c0) Stream added, broadcasting: 1\nI0204 12:47:29.847791 434 log.go:181] (0xc00003b1e0) Reply frame received for 1\nI0204 12:47:29.847816 434 log.go:181] (0xc00003b1e0) (0xc0008f6460) Create stream\nI0204 12:47:29.847822 434 log.go:181] (0xc00003b1e0) (0xc0008f6460) Stream added, broadcasting: 3\nI0204 12:47:29.848502 434 log.go:181] (0xc00003b1e0) Reply frame received for 3\nI0204 12:47:29.848532 434 log.go:181] (0xc00003b1e0) (0xc00056e500) Create stream\nI0204 12:47:29.848541 434 log.go:181] (0xc00003b1e0) (0xc00056e500) Stream added, broadcasting: 5\nI0204 12:47:29.849515 434 log.go:181] (0xc00003b1e0) Reply frame received for 5\nI0204 12:47:29.908008 434 log.go:181] (0xc00003b1e0) Data frame received for 3\nI0204 12:47:29.908051 434 log.go:181] (0xc0008f6460) (3) Data frame handling\nI0204 12:47:29.908090 434 log.go:181] (0xc00003b1e0) Data frame received for 5\nI0204 12:47:29.908127 434 log.go:181] (0xc00056e500) (5) Data frame handling\nI0204 12:47:29.908160 434 log.go:181] (0xc00056e500) (5) Data frame sent\nI0204 12:47:29.908184 434 log.go:181] (0xc00003b1e0) Data frame received for 5\nI0204 12:47:29.908202 434 log.go:181] (0xc00056e500) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 30973\nConnection to 172.18.0.14 30973 port [tcp/*] succeeded!\nI0204 12:47:29.909504 434 log.go:181] (0xc00003b1e0) Data frame received for 1\nI0204 12:47:29.909543 434 log.go:181] (0xc0008f63c0) (1) Data frame handling\nI0204 12:47:29.909574 434 log.go:181] (0xc0008f63c0) (1) Data frame sent\nI0204 12:47:29.909596 434 log.go:181] (0xc00003b1e0) (0xc0008f63c0) Stream removed, broadcasting: 1\nI0204 12:47:29.909873 434 log.go:181] (0xc00003b1e0) Go away received\nI0204 12:47:29.910064 434 log.go:181] (0xc00003b1e0) (0xc0008f63c0) Stream removed, broadcasting: 1\nI0204 12:47:29.910088 434 log.go:181] (0xc00003b1e0) (0xc0008f6460) Stream removed, broadcasting: 3\nI0204 12:47:29.910100 434 log.go:181] (0xc00003b1e0) (0xc00056e500) Stream removed, broadcasting: 5\n" Feb 4 12:47:29.916: INFO: stdout: "" Feb 
4 12:47:29.917: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-8494 exec execpodmvd9v -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.16 30973' Feb 4 12:47:30.435: INFO: stderr: "I0204 12:47:30.362474 450 log.go:181] (0xc00056c000) (0xc0005a8000) Create stream\nI0204 12:47:30.362547 450 log.go:181] (0xc00056c000) (0xc0005a8000) Stream added, broadcasting: 1\nI0204 12:47:30.363838 450 log.go:181] (0xc00056c000) Reply frame received for 1\nI0204 12:47:30.363861 450 log.go:181] (0xc00056c000) (0xc000a1a280) Create stream\nI0204 12:47:30.363871 450 log.go:181] (0xc00056c000) (0xc000a1a280) Stream added, broadcasting: 3\nI0204 12:47:30.364469 450 log.go:181] (0xc00056c000) Reply frame received for 3\nI0204 12:47:30.364495 450 log.go:181] (0xc00056c000) (0xc000a1a320) Create stream\nI0204 12:47:30.364501 450 log.go:181] (0xc00056c000) (0xc000a1a320) Stream added, broadcasting: 5\nI0204 12:47:30.365168 450 log.go:181] (0xc00056c000) Reply frame received for 5\nI0204 12:47:30.422515 450 log.go:181] (0xc00056c000) Data frame received for 3\nI0204 12:47:30.422540 450 log.go:181] (0xc000a1a280) (3) Data frame handling\nI0204 12:47:30.422572 450 log.go:181] (0xc00056c000) Data frame received for 5\nI0204 12:47:30.422578 450 log.go:181] (0xc000a1a320) (5) Data frame handling\nI0204 12:47:30.422584 450 log.go:181] (0xc000a1a320) (5) Data frame sent\nI0204 12:47:30.422589 450 log.go:181] (0xc00056c000) Data frame received for 5\nI0204 12:47:30.422593 450 log.go:181] (0xc000a1a320) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.16 30973\nConnection to 172.18.0.16 30973 port [tcp/*] succeeded!\nI0204 12:47:30.423981 450 log.go:181] (0xc00056c000) Data frame received for 1\nI0204 12:47:30.424006 450 log.go:181] (0xc0005a8000) (1) Data frame handling\nI0204 12:47:30.424026 450 log.go:181] (0xc0005a8000) (1) Data frame sent\nI0204 12:47:30.425166 450 log.go:181] (0xc00056c000) (0xc0005a8000) Stream removed, broadcasting: 1\nI0204 12:47:30.425301 450 log.go:181] (0xc00056c000) Go away received\nI0204 12:47:30.425429 450 log.go:181] (0xc00056c000) (0xc0005a8000) Stream removed, broadcasting: 1\nI0204 12:47:30.425441 450 log.go:181] (0xc00056c000) (0xc000a1a280) Stream removed, broadcasting: 3\nI0204 12:47:30.425447 450 log.go:181] (0xc00056c000) (0xc000a1a320) Stream removed, broadcasting: 5\n" Feb 4 12:47:30.435: INFO: stdout: "" [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:47:30.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8494" for this suite. 
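The four nc probes above confirm the same backends are reachable four ways: by service name (nodeport-test:80), by ClusterIP (10.96.125.195:80), and on the allocated NodePort (30973) via both node IPs (172.18.0.14 and 172.18.0.16). For orientation, a minimal client-go sketch of creating such a Service and reading back the port the control plane allocates; the namespace and selector are illustrative assumptions, not the test's actual wiring:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "nodeport-test"},
		Spec: corev1.ServiceSpec{
			Type: corev1.ServiceTypeNodePort,
			// Must match the labels on the pods backing the service (the
			// test's replication controller pods); illustrative here.
			Selector: map[string]string{"name": "nodeport-test"},
			Ports: []corev1.ServicePort{{
				Port:       80,                 // the ClusterIP port probed above
				TargetPort: intstr.FromInt(80), // container port on the backends
				// NodePort left unset: the API server allocates one from the
				// node-port range (30973 in this run).
			}},
		},
	}
	created, err := client.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("allocated NodePort:", created.Spec.Ports[0].NodePort)
}
```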
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:744
• [SLOW TEST:19.175 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":311,"completed":55,"skipped":885,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 4 12:47:30.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 4 12:47:41.246: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 4 12:47:41.264: INFO: Pod pod-with-prestop-http-hook still exists
Feb 4 12:47:43.265: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 4 12:47:43.331: INFO: Pod pod-with-prestop-http-hook still exists
Feb 4 12:47:45.265: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 4 12:47:45.295: INFO: Pod pod-with-prestop-http-hook still exists
Feb 4 12:47:47.265: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 4 12:47:47.307: INFO: Pod pod-with-prestop-http-hook still exists
Feb 4 12:47:49.266: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 4 12:47:49.283: INFO: Pod pod-with-prestop-http-hook still exists
Feb 4 12:47:51.265: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 4 12:47:51.313: INFO: Pod pod-with-prestop-http-hook still exists
Feb 4 12:47:53.265: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 4 12:47:53.276: INFO: Pod pod-with-prestop-http-hook still exists
Feb 4 12:47:55.265: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 4 12:47:55.324: INFO: Pod pod-with-prestop-http-hook still exists
Feb 4 12:47:57.265: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 4 12:47:57.283: INFO: Pod pod-with-prestop-http-hook still exists
Feb 4 12:47:59.265: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 4 12:47:59.272: INFO: Pod pod-with-prestop-http-hook still exists
Feb 4 12:48:01.265: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 4 12:48:01.272: INFO: Pod pod-with-prestop-http-hook still exists
Feb 4 12:48:03.265: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 4 12:48:03.296: INFO: Pod pod-with-prestop-http-hook still exists
Feb 4 12:48:05.265: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 4 12:48:05.291: INFO: Pod pod-with-prestop-http-hook still exists
Feb 4 12:48:07.265: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 4 12:48:07.304: INFO: Pod pod-with-prestop-http-hook still exists
Feb 4 12:48:09.265: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 4 12:48:09.327: INFO: Pod pod-with-prestop-http-hook still exists
Feb 4 12:48:11.265: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 4 12:48:11.320: INFO: Pod pod-with-prestop-http-hook still exists
Feb 4 12:48:13.265: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 4 12:48:13.296: INFO: Pod pod-with-prestop-http-hook still exists
Feb 4 12:48:15.265: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 4 12:48:15.270: INFO: Pod pod-with-prestop-http-hook still exists
Feb 4 12:48:17.265: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 4 12:48:17.395: INFO: Pod pod-with-prestop-http-hook still exists
Feb 4 12:48:19.265: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 4 12:48:19.609: INFO: Pod pod-with-prestop-http-hook still exists
Feb 4 12:48:21.265: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 4 12:48:21.338: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 4 12:48:21.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-163" for this suite.
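The long run of "still exists" lines above is a simple two-second poll: the test re-Gets the pod until the API server returns NotFound, which is what finally lets it assert on the preStop hook. A stripped-down sketch of that loop under the same names as the log; the real framework wraps this in its own wait helpers, and the error handling here is simplified:

```go
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns, name := "container-lifecycle-hook-163", "pod-with-prestop-http-hook"
	for {
		fmt.Printf("Waiting for pod %s to disappear\n", name)
		_, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			// The pod object is gone from the API server; deletion is complete.
			fmt.Printf("Pod %s no longer exists\n", name)
			return
		}
		if err != nil {
			panic(err) // simplification: the framework retries transient errors
		}
		fmt.Printf("Pod %s still exists\n", name)
		time.Sleep(2 * time.Second)
	}
}
```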
• [SLOW TEST:50.719 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":311,"completed":56,"skipped":910,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 4 12:48:21.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
STEP: Creating configMap with name configmap-test-volume-abecb780-452d-4904-acac-4c0f0b2ee6a3
STEP: Creating a pod to test consume configMaps
Feb 4 12:48:21.774: INFO: Waiting up to 5m0s for pod "pod-configmaps-2b25caa4-2818-43ea-a2db-c78f64be7509" in namespace "configmap-8915" to be "Succeeded or Failed"
Feb 4 12:48:21.814: INFO: Pod "pod-configmaps-2b25caa4-2818-43ea-a2db-c78f64be7509": Phase="Pending", Reason="", readiness=false. Elapsed: 39.500926ms
Feb 4 12:48:24.147: INFO: Pod "pod-configmaps-2b25caa4-2818-43ea-a2db-c78f64be7509": Phase="Pending", Reason="", readiness=false. Elapsed: 2.372394764s
Feb 4 12:48:26.156: INFO: Pod "pod-configmaps-2b25caa4-2818-43ea-a2db-c78f64be7509": Phase="Running", Reason="", readiness=true. Elapsed: 4.38186896s
Feb 4 12:48:28.172: INFO: Pod "pod-configmaps-2b25caa4-2818-43ea-a2db-c78f64be7509": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.397773485s
STEP: Saw pod success
Feb 4 12:48:28.172: INFO: Pod "pod-configmaps-2b25caa4-2818-43ea-a2db-c78f64be7509" satisfied condition "Succeeded or Failed"
Feb 4 12:48:28.181: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-2b25caa4-2818-43ea-a2db-c78f64be7509 container agnhost-container:
STEP: delete the pod
Feb 4 12:48:28.336: INFO: Waiting for pod pod-configmaps-2b25caa4-2818-43ea-a2db-c78f64be7509 to disappear
Feb 4 12:48:28.410: INFO: Pod pod-configmaps-2b25caa4-2818-43ea-a2db-c78f64be7509 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 4 12:48:28.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8915" for this suite.
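The ConfigMap test above mounts the map as a volume into a short-lived pod, waits for the pod to reach phase Succeeded, and asserts on the container's logs. A minimal sketch of the pod object it describes; the ConfigMap name is taken from the log, while the key ("data-1"), mount path, image, and pod name are illustrative assumptions:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // run once, end in Succeeded or Failed
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "configmap-test-volume-abecb780-452d-4904-acac-4c0f0b2ee6a3",
						},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "agnhost-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21", // illustrative
				// Print the projected key so the test can assert on the pod logs.
				Command: []string{"cat", "/etc/configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("configmap-8915").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```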
• [SLOW TEST:6.999 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":311,"completed":57,"skipped":946,"failed":0} SSSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:48:28.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:740 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating service in namespace services-235 STEP: creating service affinity-clusterip-transition in namespace services-235 STEP: creating replication controller affinity-clusterip-transition in namespace services-235 I0204 12:48:28.796870 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-235, replica count: 3 I0204 12:48:31.847296 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 12:48:34.847548 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 4 12:48:34.934: INFO: Creating new exec pod Feb 4 12:48:40.124: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-235 exec execpod-affinitytwcmz -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Feb 4 12:48:40.370: INFO: stderr: "I0204 12:48:40.280953 470 log.go:181] (0xc00070a000) (0xc000aae0a0) Create stream\nI0204 12:48:40.280993 470 log.go:181] (0xc00070a000) (0xc000aae0a0) Stream added, broadcasting: 1\nI0204 12:48:40.282225 470 log.go:181] (0xc00070a000) Reply frame received for 1\nI0204 12:48:40.282253 470 log.go:181] (0xc00070a000) (0xc0009b4000) Create stream\nI0204 12:48:40.282260 470 log.go:181] (0xc00070a000) (0xc0009b4000) Stream added, broadcasting: 3\nI0204 12:48:40.282877 470 log.go:181] (0xc00070a000) Reply frame received for 3\nI0204 12:48:40.282917 470 log.go:181] (0xc00070a000) (0xc000aae140) Create stream\nI0204 12:48:40.282935 470 log.go:181] (0xc00070a000) (0xc000aae140) Stream added, broadcasting: 5\nI0204 12:48:40.283594 470 log.go:181] (0xc00070a000) Reply frame received for 5\nI0204 12:48:40.365490 470 log.go:181] (0xc00070a000) Data frame received for 3\nI0204 12:48:40.365516 470 log.go:181] 
(0xc0009b4000) (3) Data frame handling\nI0204 12:48:40.365539 470 log.go:181] (0xc00070a000) Data frame received for 5\nI0204 12:48:40.365559 470 log.go:181] (0xc000aae140) (5) Data frame handling\nI0204 12:48:40.365570 470 log.go:181] (0xc000aae140) (5) Data frame sent\nI0204 12:48:40.365579 470 log.go:181] (0xc00070a000) Data frame received for 5\nI0204 12:48:40.365586 470 log.go:181] (0xc000aae140) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0204 12:48:40.366567 470 log.go:181] (0xc00070a000) Data frame received for 1\nI0204 12:48:40.366589 470 log.go:181] (0xc000aae0a0) (1) Data frame handling\nI0204 12:48:40.366602 470 log.go:181] (0xc000aae0a0) (1) Data frame sent\nI0204 12:48:40.366676 470 log.go:181] (0xc00070a000) (0xc000aae0a0) Stream removed, broadcasting: 1\nI0204 12:48:40.366820 470 log.go:181] (0xc00070a000) Go away received\nI0204 12:48:40.366964 470 log.go:181] (0xc00070a000) (0xc000aae0a0) Stream removed, broadcasting: 1\nI0204 12:48:40.366977 470 log.go:181] (0xc00070a000) (0xc0009b4000) Stream removed, broadcasting: 3\nI0204 12:48:40.366983 470 log.go:181] (0xc00070a000) (0xc000aae140) Stream removed, broadcasting: 5\n" Feb 4 12:48:40.370: INFO: stdout: "" Feb 4 12:48:40.371: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-235 exec execpod-affinitytwcmz -- /bin/sh -x -c nc -zv -t -w 2 10.96.22.228 80' Feb 4 12:48:40.642: INFO: stderr: "I0204 12:48:40.556437 488 log.go:181] (0xc00003a0b0) (0xc0005900a0) Create stream\nI0204 12:48:40.556530 488 log.go:181] (0xc00003a0b0) (0xc0005900a0) Stream added, broadcasting: 1\nI0204 12:48:40.558474 488 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI0204 12:48:40.558519 488 log.go:181] (0xc00003a0b0) (0xc000e10320) Create stream\nI0204 12:48:40.558531 488 log.go:181] (0xc00003a0b0) (0xc000e10320) Stream added, broadcasting: 3\nI0204 12:48:40.559560 488 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI0204 12:48:40.559601 488 log.go:181] (0xc00003a0b0) (0xc000134d20) Create stream\nI0204 12:48:40.559611 488 log.go:181] (0xc00003a0b0) (0xc000134d20) Stream added, broadcasting: 5\nI0204 12:48:40.560481 488 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI0204 12:48:40.633510 488 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 12:48:40.633577 488 log.go:181] (0xc000e10320) (3) Data frame handling\nI0204 12:48:40.633647 488 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0204 12:48:40.633679 488 log.go:181] (0xc000134d20) (5) Data frame handling\nI0204 12:48:40.633692 488 log.go:181] (0xc000134d20) (5) Data frame sent\nI0204 12:48:40.633702 488 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0204 12:48:40.633709 488 log.go:181] (0xc000134d20) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.22.228 80\nConnection to 10.96.22.228 80 port [tcp/http] succeeded!\nI0204 12:48:40.635049 488 log.go:181] (0xc00003a0b0) Data frame received for 1\nI0204 12:48:40.635068 488 log.go:181] (0xc0005900a0) (1) Data frame handling\nI0204 12:48:40.635077 488 log.go:181] (0xc0005900a0) (1) Data frame sent\nI0204 12:48:40.635087 488 log.go:181] (0xc00003a0b0) (0xc0005900a0) Stream removed, broadcasting: 1\nI0204 12:48:40.635101 488 log.go:181] (0xc00003a0b0) Go away received\nI0204 12:48:40.635414 488 log.go:181] (0xc00003a0b0) (0xc0005900a0) Stream removed, broadcasting: 1\nI0204 12:48:40.635428 488 log.go:181] (0xc00003a0b0) 
(0xc000e10320) Stream removed, broadcasting: 3\nI0204 12:48:40.635433 488 log.go:181] (0xc00003a0b0) (0xc000134d20) Stream removed, broadcasting: 5\n" Feb 4 12:48:40.642: INFO: stdout: "" Feb 4 12:48:40.694: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-235 exec execpod-affinitytwcmz -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.22.228:80/ ; done' Feb 4 12:48:40.971: INFO: stderr: "I0204 12:48:40.833770 506 log.go:181] (0xc000140370) (0xc0004b63c0) Create stream\nI0204 12:48:40.833905 506 log.go:181] (0xc000140370) (0xc0004b63c0) Stream added, broadcasting: 1\nI0204 12:48:40.837432 506 log.go:181] (0xc000140370) Reply frame received for 1\nI0204 12:48:40.837464 506 log.go:181] (0xc000140370) (0xc0008f0280) Create stream\nI0204 12:48:40.837472 506 log.go:181] (0xc000140370) (0xc0008f0280) Stream added, broadcasting: 3\nI0204 12:48:40.838320 506 log.go:181] (0xc000140370) Reply frame received for 3\nI0204 12:48:40.838362 506 log.go:181] (0xc000140370) (0xc0004b6c80) Create stream\nI0204 12:48:40.838370 506 log.go:181] (0xc000140370) (0xc0004b6c80) Stream added, broadcasting: 5\nI0204 12:48:40.839404 506 log.go:181] (0xc000140370) Reply frame received for 5\nI0204 12:48:40.881913 506 log.go:181] (0xc000140370) Data frame received for 5\nI0204 12:48:40.881939 506 log.go:181] (0xc0004b6c80) (5) Data frame handling\nI0204 12:48:40.881947 506 log.go:181] (0xc0004b6c80) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\nI0204 12:48:40.881965 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.881982 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.881995 506 log.go:181] (0xc0008f0280) (3) Data frame sent\nI0204 12:48:40.885588 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.885601 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.885613 506 log.go:181] (0xc0008f0280) (3) Data frame sent\nI0204 12:48:40.886051 506 log.go:181] (0xc000140370) Data frame received for 5\nI0204 12:48:40.886065 506 log.go:181] (0xc0004b6c80) (5) Data frame handling\nI0204 12:48:40.886072 506 log.go:181] (0xc0004b6c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0204 12:48:40.886079 506 log.go:181] (0xc000140370) Data frame received for 5\nI0204 12:48:40.886114 506 log.go:181] (0xc0004b6c80) (5) Data frame handling\nI0204 12:48:40.886135 506 log.go:181] (0xc0004b6c80) (5) Data frame sent\n http://10.96.22.228:80/\nI0204 12:48:40.886155 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.886171 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.886184 506 log.go:181] (0xc0008f0280) (3) Data frame sent\nI0204 12:48:40.890100 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.890123 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.890136 506 log.go:181] (0xc0008f0280) (3) Data frame sent\nI0204 12:48:40.890818 506 log.go:181] (0xc000140370) Data frame received for 5\nI0204 12:48:40.890836 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.890855 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.890864 506 log.go:181] (0xc0008f0280) (3) Data frame sent\nI0204 12:48:40.890876 506 log.go:181] (0xc0004b6c80) (5) Data frame handling\nI0204 12:48:40.890884 506 log.go:181] (0xc0004b6c80) (5) Data frame sent\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://10.96.22.228:80/\nI0204 12:48:40.895815 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.895830 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.895842 506 log.go:181] (0xc0008f0280) (3) Data frame sent\nI0204 12:48:40.896313 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.896326 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.896334 506 log.go:181] (0xc0008f0280) (3) Data frame sent\nI0204 12:48:40.896351 506 log.go:181] (0xc000140370) Data frame received for 5\nI0204 12:48:40.896374 506 log.go:181] (0xc0004b6c80) (5) Data frame handling\nI0204 12:48:40.896394 506 log.go:181] (0xc0004b6c80) (5) Data frame sent\nI0204 12:48:40.896403 506 log.go:181] (0xc000140370) Data frame received for 5\nI0204 12:48:40.896418 506 log.go:181] (0xc0004b6c80) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\nI0204 12:48:40.896456 506 log.go:181] (0xc0004b6c80) (5) Data frame sent\nI0204 12:48:40.899057 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.899077 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.899090 506 log.go:181] (0xc0008f0280) (3) Data frame sent\nI0204 12:48:40.899407 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.899420 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.899428 506 log.go:181] (0xc0008f0280) (3) Data frame sent\nI0204 12:48:40.899435 506 log.go:181] (0xc000140370) Data frame received for 5\nI0204 12:48:40.899443 506 log.go:181] (0xc0004b6c80) (5) Data frame handling\nI0204 12:48:40.899450 506 log.go:181] (0xc0004b6c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\nI0204 12:48:40.903952 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.903971 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.903986 506 log.go:181] (0xc0008f0280) (3) Data frame sent\nI0204 12:48:40.904543 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.904555 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.904562 506 log.go:181] (0xc0008f0280) (3) Data frame sent\nI0204 12:48:40.904578 506 log.go:181] (0xc000140370) Data frame received for 5\nI0204 12:48:40.904589 506 log.go:181] (0xc0004b6c80) (5) Data frame handling\nI0204 12:48:40.904604 506 log.go:181] (0xc0004b6c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\nI0204 12:48:40.908303 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.908323 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.908348 506 log.go:181] (0xc0008f0280) (3) Data frame sent\nI0204 12:48:40.908785 506 log.go:181] (0xc000140370) Data frame received for 5\nI0204 12:48:40.908806 506 log.go:181] (0xc0004b6c80) (5) Data frame handling\n+ echo\n+ curlI0204 12:48:40.908823 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.908978 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.908996 506 log.go:181] (0xc0008f0280) (3) Data frame sent\nI0204 12:48:40.909015 506 log.go:181] (0xc0004b6c80) (5) Data frame sent\nI0204 12:48:40.909029 506 log.go:181] (0xc000140370) Data frame received for 5\nI0204 12:48:40.909042 506 log.go:181] (0xc0004b6c80) (5) Data frame handling\nI0204 12:48:40.909058 506 log.go:181] (0xc0004b6c80) (5) Data frame sent\n -q -s --connect-timeout 2 
http://10.96.22.228:80/\nI0204 12:48:40.914463 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.914483 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.914496 506 log.go:181] (0xc0008f0280) (3) Data frame sent\nI0204 12:48:40.915370 506 log.go:181] (0xc000140370) Data frame received for 5\nI0204 12:48:40.915390 506 log.go:181] (0xc0004b6c80) (5) Data frame handling\n+ echo\n+ curl -qI0204 12:48:40.915401 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.915415 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.915431 506 log.go:181] (0xc0004b6c80) (5) Data frame sent\nI0204 12:48:40.915453 506 log.go:181] (0xc000140370) Data frame received for 5\nI0204 12:48:40.915463 506 log.go:181] (0xc0004b6c80) (5) Data frame handling\nI0204 12:48:40.915472 506 log.go:181] (0xc0004b6c80) (5) Data frame sent\n -s --connect-timeout 2 http://10.96.22.228:80/\nI0204 12:48:40.915489 506 log.go:181] (0xc0008f0280) (3) Data frame sent\nI0204 12:48:40.919909 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.919927 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.919942 506 log.go:181] (0xc0008f0280) (3) Data frame sent\nI0204 12:48:40.920387 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.920404 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.920420 506 log.go:181] (0xc000140370) Data frame received for 5\nI0204 12:48:40.920452 506 log.go:181] (0xc0004b6c80) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\nI0204 12:48:40.920474 506 log.go:181] (0xc0008f0280) (3) Data frame sent\nI0204 12:48:40.920509 506 log.go:181] (0xc0004b6c80) (5) Data frame sent\nI0204 12:48:40.924352 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.924374 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.924408 506 log.go:181] (0xc0008f0280) (3) Data frame sent\nI0204 12:48:40.924821 506 log.go:181] (0xc000140370) Data frame received for 5\nI0204 12:48:40.925038 506 log.go:181] (0xc0004b6c80) (5) Data frame handling\nI0204 12:48:40.925057 506 log.go:181] (0xc0004b6c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\nI0204 12:48:40.925076 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.925088 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.925099 506 log.go:181] (0xc0008f0280) (3) Data frame sent\nI0204 12:48:40.931149 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.931169 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.931188 506 log.go:181] (0xc0008f0280) (3) Data frame sent\nI0204 12:48:40.931745 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.931762 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.931770 506 log.go:181] (0xc0008f0280) (3) Data frame sent\nI0204 12:48:40.931793 506 log.go:181] (0xc000140370) Data frame received for 5\nI0204 12:48:40.931822 506 log.go:181] (0xc0004b6c80) (5) Data frame handling\nI0204 12:48:40.931845 506 log.go:181] (0xc0004b6c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\nI0204 12:48:40.937380 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.937397 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.937406 506 log.go:181] (0xc0008f0280) (3) Data frame sent\nI0204 
12:48:40.937802 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.937822 506 log.go:181] (0xc000140370) Data frame received for 5\nI0204 12:48:40.937849 506 log.go:181] (0xc0004b6c80) (5) Data frame handling\nI0204 12:48:40.937861 506 log.go:181] (0xc0004b6c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\nI0204 12:48:40.937875 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.937881 506 log.go:181] (0xc0008f0280) (3) Data frame sent\nI0204 12:48:40.943113 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.943140 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.943166 506 log.go:181] (0xc0008f0280) (3) Data frame sent\nI0204 12:48:40.943912 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.943939 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.943952 506 log.go:181] (0xc0008f0280) (3) Data frame sent\nI0204 12:48:40.944002 506 log.go:181] (0xc000140370) Data frame received for 5\nI0204 12:48:40.944027 506 log.go:181] (0xc0004b6c80) (5) Data frame handling\nI0204 12:48:40.944048 506 log.go:181] (0xc0004b6c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\nI0204 12:48:40.947746 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.947770 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.947802 506 log.go:181] (0xc0008f0280) (3) Data frame sent\nI0204 12:48:40.948203 506 log.go:181] (0xc000140370) Data frame received for 5\nI0204 12:48:40.948221 506 log.go:181] (0xc0004b6c80) (5) Data frame handling\nI0204 12:48:40.948230 506 log.go:181] (0xc0004b6c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\nI0204 12:48:40.948240 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.948246 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.948252 506 log.go:181] (0xc0008f0280) (3) Data frame sent\nI0204 12:48:40.952571 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.952595 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.952620 506 log.go:181] (0xc0008f0280) (3) Data frame sent\nI0204 12:48:40.953418 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.953437 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.953454 506 log.go:181] (0xc0008f0280) (3) Data frame sent\nI0204 12:48:40.953477 506 log.go:181] (0xc000140370) Data frame received for 5\nI0204 12:48:40.953493 506 log.go:181] (0xc0004b6c80) (5) Data frame handling\nI0204 12:48:40.953503 506 log.go:181] (0xc0004b6c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\nI0204 12:48:40.958662 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.958716 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.958742 506 log.go:181] (0xc0008f0280) (3) Data frame sent\nI0204 12:48:40.959440 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.959457 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.959472 506 log.go:181] (0xc0008f0280) (3) Data frame sent\nI0204 12:48:40.959484 506 log.go:181] (0xc000140370) Data frame received for 5\nI0204 12:48:40.959491 506 log.go:181] (0xc0004b6c80) (5) Data frame handling\nI0204 12:48:40.959500 506 log.go:181] (0xc0004b6c80) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.96.22.228:80/\nI0204 12:48:40.962634 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.962654 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.962668 506 log.go:181] (0xc0008f0280) (3) Data frame sent\nI0204 12:48:40.963267 506 log.go:181] (0xc000140370) Data frame received for 5\nI0204 12:48:40.963286 506 log.go:181] (0xc0004b6c80) (5) Data frame handling\nI0204 12:48:40.963357 506 log.go:181] (0xc000140370) Data frame received for 3\nI0204 12:48:40.963374 506 log.go:181] (0xc0008f0280) (3) Data frame handling\nI0204 12:48:40.965194 506 log.go:181] (0xc000140370) Data frame received for 1\nI0204 12:48:40.965216 506 log.go:181] (0xc0004b63c0) (1) Data frame handling\nI0204 12:48:40.965230 506 log.go:181] (0xc0004b63c0) (1) Data frame sent\nI0204 12:48:40.965241 506 log.go:181] (0xc000140370) (0xc0004b63c0) Stream removed, broadcasting: 1\nI0204 12:48:40.965438 506 log.go:181] (0xc000140370) Go away received\nI0204 12:48:40.965617 506 log.go:181] (0xc000140370) (0xc0004b63c0) Stream removed, broadcasting: 1\nI0204 12:48:40.965641 506 log.go:181] (0xc000140370) (0xc0008f0280) Stream removed, broadcasting: 3\nI0204 12:48:40.965653 506 log.go:181] (0xc000140370) (0xc0004b6c80) Stream removed, broadcasting: 5\n" Feb 4 12:48:40.972: INFO: stdout: "\naffinity-clusterip-transition-mrlvk\naffinity-clusterip-transition-mrlvk\naffinity-clusterip-transition-mrlvk\naffinity-clusterip-transition-mrlvk\naffinity-clusterip-transition-mrlvk\naffinity-clusterip-transition-mrlvk\naffinity-clusterip-transition-mrlvk\naffinity-clusterip-transition-mrlvk\naffinity-clusterip-transition-mrlvk\naffinity-clusterip-transition-mrlvk\naffinity-clusterip-transition-mrlvk\naffinity-clusterip-transition-mrlvk\naffinity-clusterip-transition-mrlvk\naffinity-clusterip-transition-mrlvk\naffinity-clusterip-transition-mrlvk\naffinity-clusterip-transition-mrlvk" Feb 4 12:48:40.972: INFO: Received response from host: affinity-clusterip-transition-mrlvk Feb 4 12:48:40.972: INFO: Received response from host: affinity-clusterip-transition-mrlvk Feb 4 12:48:40.972: INFO: Received response from host: affinity-clusterip-transition-mrlvk Feb 4 12:48:40.972: INFO: Received response from host: affinity-clusterip-transition-mrlvk Feb 4 12:48:40.972: INFO: Received response from host: affinity-clusterip-transition-mrlvk Feb 4 12:48:40.972: INFO: Received response from host: affinity-clusterip-transition-mrlvk Feb 4 12:48:40.972: INFO: Received response from host: affinity-clusterip-transition-mrlvk Feb 4 12:48:40.972: INFO: Received response from host: affinity-clusterip-transition-mrlvk Feb 4 12:48:40.972: INFO: Received response from host: affinity-clusterip-transition-mrlvk Feb 4 12:48:40.972: INFO: Received response from host: affinity-clusterip-transition-mrlvk Feb 4 12:48:40.972: INFO: Received response from host: affinity-clusterip-transition-mrlvk Feb 4 12:48:40.972: INFO: Received response from host: affinity-clusterip-transition-mrlvk Feb 4 12:48:40.972: INFO: Received response from host: affinity-clusterip-transition-mrlvk Feb 4 12:48:40.972: INFO: Received response from host: affinity-clusterip-transition-mrlvk Feb 4 12:48:40.972: INFO: Received response from host: affinity-clusterip-transition-mrlvk Feb 4 12:48:40.972: INFO: Received response from host: affinity-clusterip-transition-mrlvk Feb 4 12:49:10.972: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-235 exec execpod-affinitytwcmz -- 
/bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.22.228:80/ ; done'
Feb 4 12:49:11.785: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n"
Feb 4 12:49:11.787: INFO: stdout: "\naffinity-clusterip-transition-wfwh2\naffinity-clusterip-transition-9x259\naffinity-clusterip-transition-wfwh2\naffinity-clusterip-transition-9x259\naffinity-clusterip-transition-9x259\naffinity-clusterip-transition-9x259\naffinity-clusterip-transition-9x259\naffinity-clusterip-transition-9x259\naffinity-clusterip-transition-wfwh2\naffinity-clusterip-transition-wfwh2\naffinity-clusterip-transition-mrlvk\naffinity-clusterip-transition-9x259\naffinity-clusterip-transition-wfwh2\naffinity-clusterip-transition-mrlvk\naffinity-clusterip-transition-mrlvk\naffinity-clusterip-transition-wfwh2"
Feb 4 12:49:11.787: INFO: Received response from host: affinity-clusterip-transition-wfwh2
Feb 4 12:49:11.787: INFO: Received response from host: affinity-clusterip-transition-9x259
Feb 4 12:49:11.787: INFO: Received response from host: affinity-clusterip-transition-wfwh2
Feb 4 12:49:11.787: INFO: Received response from host: affinity-clusterip-transition-9x259
Feb 4 12:49:11.787: INFO: Received response from
host: affinity-clusterip-transition-9x259
Feb 4 12:49:11.787: INFO: Received response from host: affinity-clusterip-transition-9x259
Feb 4 12:49:11.787: INFO: Received response from host: affinity-clusterip-transition-9x259
Feb 4 12:49:11.787: INFO: Received response from host: affinity-clusterip-transition-9x259
Feb 4 12:49:11.787: INFO: Received response from host: affinity-clusterip-transition-wfwh2
Feb 4 12:49:11.787: INFO: Received response from host: affinity-clusterip-transition-wfwh2
Feb 4 12:49:11.787: INFO: Received response from host: affinity-clusterip-transition-mrlvk
Feb 4 12:49:11.787: INFO: Received response from host: affinity-clusterip-transition-9x259
Feb 4 12:49:11.787: INFO: Received response from host: affinity-clusterip-transition-wfwh2
Feb 4 12:49:11.787: INFO: Received response from host: affinity-clusterip-transition-mrlvk
Feb 4 12:49:11.787: INFO: Received response from host: affinity-clusterip-transition-mrlvk
Feb 4 12:49:11.787: INFO: Received response from host: affinity-clusterip-transition-wfwh2
Feb 4 12:49:12.313: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-235 exec execpod-affinitytwcmz -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.22.228:80/ ; done'
Feb 4 12:49:12.722: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n"
Feb 4 12:49:12.723: INFO: stdout: "\naffinity-clusterip-transition-wfwh2\naffinity-clusterip-transition-mrlvk\naffinity-clusterip-transition-mrlvk\naffinity-clusterip-transition-9x259\naffinity-clusterip-transition-mrlvk\naffinity-clusterip-transition-9x259\naffinity-clusterip-transition-wfwh2\naffinity-clusterip-transition-9x259\naffinity-clusterip-transition-9x259\naffinity-clusterip-transition-wfwh2\naffinity-clusterip-transition-wfwh2\naffinity-clusterip-transition-mrlvk\naffinity-clusterip-transition-mrlvk\naffinity-clusterip-transition-wfwh2\naffinity-clusterip-transition-wfwh2\naffinity-clusterip-transition-wfwh2"
Feb 4 12:49:12.723: INFO: Received response from host: affinity-clusterip-transition-wfwh2
Feb 4 12:49:12.723: INFO: Received response from host: affinity-clusterip-transition-mrlvk
Feb 4 12:49:12.723: INFO: Received response from host: affinity-clusterip-transition-mrlvk
Feb 4 12:49:12.723: INFO: Received response from host: affinity-clusterip-transition-9x259
Feb 4 12:49:12.723: INFO: Received response from host: affinity-clusterip-transition-mrlvk
Feb 4 12:49:12.723: INFO: Received response from host: affinity-clusterip-transition-9x259
Feb 4 12:49:12.723: INFO: Received response from host: affinity-clusterip-transition-wfwh2
Feb 4 12:49:12.723: INFO: Received response from host: affinity-clusterip-transition-9x259
Feb 4 12:49:12.723: INFO: Received response from host:
affinity-clusterip-transition-9x259
Feb 4 12:49:12.724: INFO: Received response from host: affinity-clusterip-transition-wfwh2
Feb 4 12:49:12.724: INFO: Received response from host: affinity-clusterip-transition-wfwh2
Feb 4 12:49:12.724: INFO: Received response from host: affinity-clusterip-transition-mrlvk
Feb 4 12:49:12.724: INFO: Received response from host: affinity-clusterip-transition-mrlvk
Feb 4 12:49:12.724: INFO: Received response from host: affinity-clusterip-transition-wfwh2
Feb 4 12:49:12.724: INFO: Received response from host: affinity-clusterip-transition-wfwh2
Feb 4 12:49:12.724: INFO: Received response from host: affinity-clusterip-transition-wfwh2
Feb 4 12:49:42.724: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-235 exec execpod-affinitytwcmz -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.22.228:80/ ; done'
Feb 4 12:49:43.098: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.22.228:80/\n"
Feb 4 12:49:43.099: INFO: stdout: "\naffinity-clusterip-transition-wfwh2\naffinity-clusterip-transition-wfwh2\naffinity-clusterip-transition-wfwh2\naffinity-clusterip-transition-wfwh2\naffinity-clusterip-transition-wfwh2\naffinity-clusterip-transition-wfwh2\naffinity-clusterip-transition-wfwh2\naffinity-clusterip-transition-wfwh2\naffinity-clusterip-transition-wfwh2\naffinity-clusterip-transition-wfwh2\naffinity-clusterip-transition-wfwh2\naffinity-clusterip-transition-wfwh2\naffinity-clusterip-transition-wfwh2\naffinity-clusterip-transition-wfwh2\naffinity-clusterip-transition-wfwh2\naffinity-clusterip-transition-wfwh2"
Feb 4 12:49:43.099: INFO: Received response from host: affinity-clusterip-transition-wfwh2
Feb 4 12:49:43.099: INFO: Received response from host: affinity-clusterip-transition-wfwh2
Feb 4 12:49:43.099: INFO: Received response from host: affinity-clusterip-transition-wfwh2
Feb 4 12:49:43.099: INFO: Received response from host: affinity-clusterip-transition-wfwh2
Feb 4 12:49:43.099: INFO: Received response from host: affinity-clusterip-transition-wfwh2
Feb 4 12:49:43.099: INFO: Received response from host: affinity-clusterip-transition-wfwh2
Feb 4 12:49:43.099: INFO: Received response from host: affinity-clusterip-transition-wfwh2
Feb 4 12:49:43.099: INFO: Received response from host: affinity-clusterip-transition-wfwh2
Feb 4 12:49:43.099: INFO: Received response from host: affinity-clusterip-transition-wfwh2
Feb 4 12:49:43.099: INFO: Received response from host: affinity-clusterip-transition-wfwh2
Feb 4 12:49:43.099: INFO: Received response from host: affinity-clusterip-transition-wfwh2
Feb 4 12:49:43.099: INFO: Received response from host: affinity-clusterip-transition-wfwh2
Feb 4 12:49:43.099: INFO: Received response from host:
affinity-clusterip-transition-wfwh2 Feb 4 12:49:43.099: INFO: Received response from host: affinity-clusterip-transition-wfwh2 Feb 4 12:49:43.099: INFO: Received response from host: affinity-clusterip-transition-wfwh2 Feb 4 12:49:43.099: INFO: Received response from host: affinity-clusterip-transition-wfwh2 Feb 4 12:49:43.099: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-235, will wait for the garbage collector to delete the pods Feb 4 12:49:44.303: INFO: Deleting ReplicationController affinity-clusterip-transition took: 370.345063ms Feb 4 12:49:44.904: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 600.24349ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:50:21.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-235" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:744 • [SLOW TEST:112.801 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":311,"completed":58,"skipped":950,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:50:21.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:50:21.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-1942" for this suite. 
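------------------------------
For reference, the session-affinity switch exercised in the [sig-network] Services spec above boils down to updating a single field on the Service: the spec flips Spec.SessionAffinity between None and ClientIP and re-runs the curl loop. The first two runs spread across all three backends; after the switch, every request lands on affinity-clusterip-transition-wfwh2. Below is a minimal client-go sketch of that switch, not the e2e framework's own helper; the Service name (assumed to match the ReplicationController, affinity-clusterip-transition) is an assumption, while the namespace and kubeconfig path come from the log.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the e2e run uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Fetch the Service and flip its affinity; "affinity-clusterip-transition"
	// is an assumed name, not one confirmed by the log.
	svc, err := cs.CoreV1().Services("services-235").Get(ctx, "affinity-clusterip-transition", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	svc.Spec.SessionAffinity = corev1.ServiceAffinityClientIP // was corev1.ServiceAffinityNone
	if _, err := cs.CoreV1().Services("services-235").Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("session affinity set to ClientIP; repeated requests should now pin to one endpoint")
}
------------------------------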
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":311,"completed":59,"skipped":1052,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:50:21.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 12:50:22.072: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"1b7b8e5b-250e-4041-9d3d-6280b727b0d7", Controller:(*bool)(0xc0049b2472), BlockOwnerDeletion:(*bool)(0xc0049b2473)}} Feb 4 12:50:22.082: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"677dbadf-e563-4cde-a1cf-44f874f73e14", Controller:(*bool)(0xc003c6520a), BlockOwnerDeletion:(*bool)(0xc003c6520b)}} Feb 4 12:50:22.102: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"7a59076f-b18f-46d2-8c9d-b8d3b1faaaa0", Controller:(*bool)(0xc000c8a46a), BlockOwnerDeletion:(*bool)(0xc000c8a46b)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:50:27.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2969" for this suite. 
• [SLOW TEST:5.686 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":311,"completed":60,"skipped":1054,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:50:27.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 12:50:28.161: INFO: The status of Pod test-webserver-b9c992cf-9ad4-4d60-a724-cfc62c29c054 is Pending, waiting for it to be Running (with Ready = true) Feb 4 12:50:30.705: INFO: The status of Pod test-webserver-b9c992cf-9ad4-4d60-a724-cfc62c29c054 is Pending, waiting for it to be Running (with Ready = true) Feb 4 12:50:32.762: INFO: The status of Pod test-webserver-b9c992cf-9ad4-4d60-a724-cfc62c29c054 is Pending, waiting for it to be Running (with Ready = true) Feb 4 12:50:34.271: INFO: The status of Pod test-webserver-b9c992cf-9ad4-4d60-a724-cfc62c29c054 is Running (Ready = false) Feb 4 12:50:36.255: INFO: The status of Pod test-webserver-b9c992cf-9ad4-4d60-a724-cfc62c29c054 is Running (Ready = false) Feb 4 12:50:38.240: INFO: The status of Pod test-webserver-b9c992cf-9ad4-4d60-a724-cfc62c29c054 is Running (Ready = false) Feb 4 12:50:40.255: INFO: The status of Pod test-webserver-b9c992cf-9ad4-4d60-a724-cfc62c29c054 is Running (Ready = false) Feb 4 12:50:42.249: INFO: The status of Pod test-webserver-b9c992cf-9ad4-4d60-a724-cfc62c29c054 is Running (Ready = false) Feb 4 12:50:44.249: INFO: The status of Pod test-webserver-b9c992cf-9ad4-4d60-a724-cfc62c29c054 is Running (Ready = false) Feb 4 12:50:46.731: INFO: The status of Pod test-webserver-b9c992cf-9ad4-4d60-a724-cfc62c29c054 is Running (Ready = false) Feb 4 12:50:48.476: INFO: The status of Pod test-webserver-b9c992cf-9ad4-4d60-a724-cfc62c29c054 is Running (Ready = false) Feb 4 12:50:50.261: INFO: The status of Pod test-webserver-b9c992cf-9ad4-4d60-a724-cfc62c29c054 is Running (Ready = false) Feb 4 12:50:52.429: INFO: The status of Pod test-webserver-b9c992cf-9ad4-4d60-a724-cfc62c29c054 is Running (Ready = true) Feb 4 12:50:52.609: INFO: Container started at 2021-02-04 12:50:33 +0000 UTC, pod became ready at 2021-02-04 12:50:50 +0000 UTC [AfterEach] [k8s.io] Probing container 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:50:52.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9591" for this suite. • [SLOW TEST:25.362 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":311,"completed":61,"skipped":1076,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:50:52.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating secret with name secret-test-962f9f67-68bc-43d3-857c-fee7852779ae STEP: Creating a pod to test consume secrets Feb 4 12:50:53.378: INFO: Waiting up to 5m0s for pod "pod-secrets-0d672c09-caa5-4734-8338-f14a56fe5e7b" in namespace "secrets-5975" to be "Succeeded or Failed" Feb 4 12:50:53.417: INFO: Pod "pod-secrets-0d672c09-caa5-4734-8338-f14a56fe5e7b": Phase="Pending", Reason="", readiness=false. Elapsed: 38.738021ms Feb 4 12:50:55.711: INFO: Pod "pod-secrets-0d672c09-caa5-4734-8338-f14a56fe5e7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.332910868s Feb 4 12:50:57.716: INFO: Pod "pod-secrets-0d672c09-caa5-4734-8338-f14a56fe5e7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.3380452s Feb 4 12:51:00.081: INFO: Pod "pod-secrets-0d672c09-caa5-4734-8338-f14a56fe5e7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.703362336s STEP: Saw pod success Feb 4 12:51:00.081: INFO: Pod "pod-secrets-0d672c09-caa5-4734-8338-f14a56fe5e7b" satisfied condition "Succeeded or Failed" Feb 4 12:51:00.130: INFO: Trying to get logs from node latest-worker pod pod-secrets-0d672c09-caa5-4734-8338-f14a56fe5e7b container secret-volume-test: STEP: delete the pod Feb 4 12:51:01.330: INFO: Waiting for pod pod-secrets-0d672c09-caa5-4734-8338-f14a56fe5e7b to disappear Feb 4 12:51:01.552: INFO: Pod pod-secrets-0d672c09-caa5-4734-8338-f14a56fe5e7b no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:51:01.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5975" for this suite. 
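------------------------------
The [k8s.io] Probing container spec a little further up is worth pausing on: it asserts that a pod whose readiness probe has an initial delay is not marked Ready before that delay elapses (the container started at 12:50:33 but only became ready around 12:50:50), and that the container never restarts. A hedged sketch of such a pod follows; the 20-second delay and the nginx stand-in image are assumptions, not the spec's actual values.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "nginx:1.14", // stand-in: anything serving HTTP on :80
				ReadinessProbe: &corev1.Probe{
					// Handler was renamed ProbeHandler in newer k8s.io/api releases.
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)},
					},
					InitialDelaySeconds: 20, // the pod must not report Ready before this
					PeriodSeconds:       5,
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("container-probe-demo").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// The spec then polls pod status, as the log shows, and additionally
	// checks that the container's restart count stays at zero.
}
------------------------------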
STEP: Destroying namespace "secret-namespace-155" for this suite. • [SLOW TEST:9.642 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":311,"completed":62,"skipped":1093,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:51:02.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:51:11.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1555" for this suite. 
• [SLOW TEST:8.812 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":311,"completed":63,"skipped":1156,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:51:11.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:299 [It] should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a replication controller Feb 4 12:51:11.576: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 create -f -' Feb 4 12:51:11.906: INFO: stderr: "" Feb 4 12:51:11.906: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 4 12:51:11.906: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 4 12:51:12.042: INFO: stderr: "" Feb 4 12:51:12.042: INFO: stdout: "update-demo-nautilus-pcqgq update-demo-nautilus-r6v88 " Feb 4 12:51:12.042: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods update-demo-nautilus-pcqgq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 4 12:51:12.182: INFO: stderr: "" Feb 4 12:51:12.182: INFO: stdout: "" Feb 4 12:51:12.182: INFO: update-demo-nautilus-pcqgq is created but not running Feb 4 12:51:17.182: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 4 12:51:17.476: INFO: stderr: "" Feb 4 12:51:17.476: INFO: stdout: "update-demo-nautilus-pcqgq update-demo-nautilus-r6v88 " Feb 4 12:51:17.476: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods update-demo-nautilus-pcqgq -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 4 12:51:17.681: INFO: stderr: "" Feb 4 12:51:17.681: INFO: stdout: "" Feb 4 12:51:17.681: INFO: update-demo-nautilus-pcqgq is created but not running Feb 4 12:51:22.682: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 4 12:51:22.895: INFO: stderr: "" Feb 4 12:51:22.895: INFO: stdout: "update-demo-nautilus-pcqgq update-demo-nautilus-r6v88 " Feb 4 12:51:22.895: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods update-demo-nautilus-pcqgq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 4 12:51:23.103: INFO: stderr: "" Feb 4 12:51:23.103: INFO: stdout: "" Feb 4 12:51:23.103: INFO: update-demo-nautilus-pcqgq is created but not running Feb 4 12:51:28.103: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 4 12:51:28.202: INFO: stderr: "" Feb 4 12:51:28.202: INFO: stdout: "update-demo-nautilus-pcqgq update-demo-nautilus-r6v88 " Feb 4 12:51:28.202: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods update-demo-nautilus-pcqgq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 4 12:51:28.383: INFO: stderr: "" Feb 4 12:51:28.383: INFO: stdout: "true" Feb 4 12:51:28.383: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods update-demo-nautilus-pcqgq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 4 12:51:28.583: INFO: stderr: "" Feb 4 12:51:28.583: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Feb 4 12:51:28.583: INFO: validating pod update-demo-nautilus-pcqgq Feb 4 12:51:28.745: INFO: got data: { "image": "nautilus.jpg" } Feb 4 12:51:28.745: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 4 12:51:28.745: INFO: update-demo-nautilus-pcqgq is verified up and running Feb 4 12:51:28.745: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods update-demo-nautilus-r6v88 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 4 12:51:28.917: INFO: stderr: "" Feb 4 12:51:28.917: INFO: stdout: "true" Feb 4 12:51:28.917: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods update-demo-nautilus-r6v88 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 4 12:51:29.092: INFO: stderr: "" Feb 4 12:51:29.092: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Feb 4 12:51:29.092: INFO: validating pod update-demo-nautilus-r6v88 Feb 4 12:51:29.114: INFO: got data: { "image": "nautilus.jpg" } Feb 4 12:51:29.115: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 4 12:51:29.115: INFO: update-demo-nautilus-r6v88 is verified up and running STEP: scaling down the replication controller Feb 4 12:51:29.118: INFO: scanned /root for discovery docs: Feb 4 12:51:29.118: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 scale rc update-demo-nautilus --replicas=1 --timeout=5m' Feb 4 12:51:30.568: INFO: stderr: "" Feb 4 12:51:30.568: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 4 12:51:30.568: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 4 12:51:30.834: INFO: stderr: "" Feb 4 12:51:30.834: INFO: stdout: "update-demo-nautilus-pcqgq update-demo-nautilus-r6v88 " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 4 12:51:35.834: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 4 12:51:35.954: INFO: stderr: "" Feb 4 12:51:35.954: INFO: stdout: "update-demo-nautilus-pcqgq update-demo-nautilus-r6v88 " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 4 12:51:40.955: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 4 12:51:41.172: INFO: stderr: "" Feb 4 12:51:41.172: INFO: stdout: "update-demo-nautilus-pcqgq update-demo-nautilus-r6v88 " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 4 12:51:46.172: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 4 12:51:46.454: INFO: stderr: "" Feb 4 12:51:46.454: INFO: stdout: "update-demo-nautilus-pcqgq update-demo-nautilus-r6v88 " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 4 12:51:51.455: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 4 12:51:51.588: INFO: stderr: "" Feb 4 12:51:51.588: INFO: stdout: "update-demo-nautilus-pcqgq update-demo-nautilus-r6v88 " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 4 12:51:56.588: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 4 12:51:56.696: INFO: stderr: "" Feb 4 12:51:56.696: INFO: stdout: "update-demo-nautilus-pcqgq 
update-demo-nautilus-r6v88 " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 4 12:52:01.696: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 4 12:52:01.795: INFO: stderr: "" Feb 4 12:52:01.795: INFO: stdout: "update-demo-nautilus-pcqgq update-demo-nautilus-r6v88 " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 4 12:52:06.795: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 4 12:52:06.900: INFO: stderr: "" Feb 4 12:52:06.900: INFO: stdout: "update-demo-nautilus-pcqgq update-demo-nautilus-r6v88 " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 4 12:52:11.900: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 4 12:52:12.518: INFO: stderr: "" Feb 4 12:52:12.518: INFO: stdout: "update-demo-nautilus-pcqgq update-demo-nautilus-r6v88 " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 4 12:52:17.518: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 4 12:52:17.815: INFO: stderr: "" Feb 4 12:52:17.815: INFO: stdout: "update-demo-nautilus-pcqgq update-demo-nautilus-r6v88 " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 4 12:52:22.816: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 4 12:52:23.019: INFO: stderr: "" Feb 4 12:52:23.019: INFO: stdout: "update-demo-nautilus-pcqgq update-demo-nautilus-r6v88 " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 4 12:52:28.019: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 4 12:52:28.743: INFO: stderr: "" Feb 4 12:52:28.743: INFO: stdout: "update-demo-nautilus-pcqgq update-demo-nautilus-r6v88 " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 4 12:52:33.743: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 4 12:52:34.495: INFO: stderr: "" Feb 4 12:52:34.495: INFO: stdout: "update-demo-nautilus-pcqgq " Feb 4 12:52:34.495: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods update-demo-nautilus-pcqgq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Feb 4 12:52:34.708: INFO: stderr: "" Feb 4 12:52:34.708: INFO: stdout: "true" Feb 4 12:52:34.708: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods update-demo-nautilus-pcqgq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 4 12:52:35.446: INFO: stderr: "" Feb 4 12:52:35.446: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Feb 4 12:52:35.446: INFO: validating pod update-demo-nautilus-pcqgq Feb 4 12:52:35.649: INFO: got data: { "image": "nautilus.jpg" } Feb 4 12:52:35.649: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 4 12:52:35.649: INFO: update-demo-nautilus-pcqgq is verified up and running STEP: scaling up the replication controller Feb 4 12:52:35.652: INFO: scanned /root for discovery docs: Feb 4 12:52:35.652: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 scale rc update-demo-nautilus --replicas=2 --timeout=5m' Feb 4 12:52:37.570: INFO: stderr: "" Feb 4 12:52:37.570: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 4 12:52:37.570: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 4 12:52:38.025: INFO: stderr: "" Feb 4 12:52:38.025: INFO: stdout: "update-demo-nautilus-pcqgq update-demo-nautilus-xvbqw " Feb 4 12:52:38.026: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods update-demo-nautilus-pcqgq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 4 12:52:38.589: INFO: stderr: "" Feb 4 12:52:38.589: INFO: stdout: "true" Feb 4 12:52:38.589: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods update-demo-nautilus-pcqgq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 4 12:52:39.092: INFO: stderr: "" Feb 4 12:52:39.092: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Feb 4 12:52:39.092: INFO: validating pod update-demo-nautilus-pcqgq Feb 4 12:52:39.154: INFO: got data: { "image": "nautilus.jpg" } Feb 4 12:52:39.154: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 4 12:52:39.154: INFO: update-demo-nautilus-pcqgq is verified up and running Feb 4 12:52:39.154: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods update-demo-nautilus-xvbqw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Feb 4 12:52:39.406: INFO: stderr: "" Feb 4 12:52:39.406: INFO: stdout: "" Feb 4 12:52:39.406: INFO: update-demo-nautilus-xvbqw is created but not running Feb 4 12:52:44.406: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 4 12:52:45.112: INFO: stderr: "" Feb 4 12:52:45.112: INFO: stdout: "update-demo-nautilus-pcqgq update-demo-nautilus-xvbqw " Feb 4 12:52:45.112: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods update-demo-nautilus-pcqgq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 4 12:52:45.742: INFO: stderr: "" Feb 4 12:52:45.742: INFO: stdout: "true" Feb 4 12:52:45.742: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods update-demo-nautilus-pcqgq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 4 12:52:45.964: INFO: stderr: "" Feb 4 12:52:45.964: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Feb 4 12:52:45.964: INFO: validating pod update-demo-nautilus-pcqgq Feb 4 12:52:46.077: INFO: got data: { "image": "nautilus.jpg" } Feb 4 12:52:46.077: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 4 12:52:46.077: INFO: update-demo-nautilus-pcqgq is verified up and running Feb 4 12:52:46.077: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods update-demo-nautilus-xvbqw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 4 12:52:46.256: INFO: stderr: "" Feb 4 12:52:46.256: INFO: stdout: "true" Feb 4 12:52:46.256: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods update-demo-nautilus-xvbqw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 4 12:52:46.375: INFO: stderr: "" Feb 4 12:52:46.375: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Feb 4 12:52:46.375: INFO: validating pod update-demo-nautilus-xvbqw Feb 4 12:52:46.379: INFO: got data: { "image": "nautilus.jpg" } Feb 4 12:52:46.379: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 4 12:52:46.379: INFO: update-demo-nautilus-xvbqw is verified up and running STEP: using delete to clean up resources Feb 4 12:52:46.379: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 delete --grace-period=0 --force -f -' Feb 4 12:52:46.470: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Feb 4 12:52:46.470: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 4 12:52:46.470: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get rc,svc -l name=update-demo --no-headers' Feb 4 12:52:46.570: INFO: stderr: "No resources found in kubectl-1432 namespace.\n" Feb 4 12:52:46.570: INFO: stdout: "" Feb 4 12:52:46.570: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1432 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 4 12:52:47.213: INFO: stderr: "" Feb 4 12:52:47.214: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:52:47.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1432" for this suite. • [SLOW TEST:96.125 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:297 should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":311,"completed":64,"skipped":1185,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:52:47.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name projected-configmap-test-volume-map-3f1e93fa-0249-4f61-99f3-9a25a1f03cd8 STEP: Creating a pod to test consume configMaps Feb 4 12:52:50.465: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-97ffaa0b-380d-4c3c-ba24-74e24414b212" in namespace "projected-436" to be "Succeeded or Failed" Feb 4 12:52:50.910: INFO: Pod "pod-projected-configmaps-97ffaa0b-380d-4c3c-ba24-74e24414b212": Phase="Pending", Reason="", readiness=false. Elapsed: 445.424544ms Feb 4 12:52:53.400: INFO: Pod "pod-projected-configmaps-97ffaa0b-380d-4c3c-ba24-74e24414b212": Phase="Pending", Reason="", readiness=false. Elapsed: 2.935273901s Feb 4 12:52:56.042: INFO: Pod "pod-projected-configmaps-97ffaa0b-380d-4c3c-ba24-74e24414b212": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.577449323s Feb 4 12:52:58.286: INFO: Pod "pod-projected-configmaps-97ffaa0b-380d-4c3c-ba24-74e24414b212": Phase="Pending", Reason="", readiness=false. Elapsed: 7.821739817s Feb 4 12:53:00.322: INFO: Pod "pod-projected-configmaps-97ffaa0b-380d-4c3c-ba24-74e24414b212": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.857734502s STEP: Saw pod success Feb 4 12:53:00.322: INFO: Pod "pod-projected-configmaps-97ffaa0b-380d-4c3c-ba24-74e24414b212" satisfied condition "Succeeded or Failed" Feb 4 12:53:00.325: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-97ffaa0b-380d-4c3c-ba24-74e24414b212 container agnhost-container: STEP: delete the pod Feb 4 12:53:00.562: INFO: Waiting for pod pod-projected-configmaps-97ffaa0b-380d-4c3c-ba24-74e24414b212 to disappear Feb 4 12:53:00.569: INFO: Pod pod-projected-configmaps-97ffaa0b-380d-4c3c-ba24-74e24414b212 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:53:00.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-436" for this suite. • [SLOW TEST:13.195 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":65,"skipped":1193,"failed":0} SSS ------------------------------ [sig-auth] ServiceAccounts should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:53:00.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test service account token: Feb 4 12:53:00.839: INFO: Waiting up to 5m0s for pod "test-pod-a476f98e-d28c-4fe2-84bf-df43a721788d" in namespace "svcaccounts-4780" to be "Succeeded or Failed" Feb 4 12:53:00.863: INFO: Pod "test-pod-a476f98e-d28c-4fe2-84bf-df43a721788d": Phase="Pending", Reason="", readiness=false. Elapsed: 23.771817ms Feb 4 12:53:03.021: INFO: Pod "test-pod-a476f98e-d28c-4fe2-84bf-df43a721788d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.181713207s Feb 4 12:53:05.117: INFO: Pod "test-pod-a476f98e-d28c-4fe2-84bf-df43a721788d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.27767233s Feb 4 12:53:07.188: INFO: Pod "test-pod-a476f98e-d28c-4fe2-84bf-df43a721788d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.349036977s STEP: Saw pod success Feb 4 12:53:07.188: INFO: Pod "test-pod-a476f98e-d28c-4fe2-84bf-df43a721788d" satisfied condition "Succeeded or Failed" Feb 4 12:53:07.460: INFO: Trying to get logs from node latest-worker2 pod test-pod-a476f98e-d28c-4fe2-84bf-df43a721788d container agnhost-container: STEP: delete the pod Feb 4 12:53:09.752: INFO: Waiting for pod test-pod-a476f98e-d28c-4fe2-84bf-df43a721788d to disappear Feb 4 12:53:10.685: INFO: Pod test-pod-a476f98e-d28c-4fe2-84bf-df43a721788d no longer exists [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:53:10.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4780" for this suite. • [SLOW TEST:10.940 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":311,"completed":66,"skipped":1196,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:53:11.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:740 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating service in namespace services-9649 STEP: creating service affinity-nodeport in namespace services-9649 STEP: creating replication controller affinity-nodeport in namespace services-9649 I0204 12:53:16.554826 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-9649, replica count: 3 I0204 12:53:19.605269 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 12:53:22.605490 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 12:53:25.605726 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 12:53:28.605946 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 12:53:31.606165 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady Feb 4 12:53:32.140: INFO: Creating new exec pod Feb 4 12:53:41.303: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-9649 exec execpod-affinitysmlgm -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Feb 4 12:53:41.512: INFO: stderr: "I0204 12:53:41.434291 1301 log.go:181] (0xc00003ac60) (0xc000ab41e0) Create stream\nI0204 12:53:41.434360 1301 log.go:181] (0xc00003ac60) (0xc000ab41e0) Stream added, broadcasting: 1\nI0204 12:53:41.436209 1301 log.go:181] (0xc00003ac60) Reply frame received for 1\nI0204 12:53:41.436275 1301 log.go:181] (0xc00003ac60) (0xc00070e8c0) Create stream\nI0204 12:53:41.436307 1301 log.go:181] (0xc00003ac60) (0xc00070e8c0) Stream added, broadcasting: 3\nI0204 12:53:41.437313 1301 log.go:181] (0xc00003ac60) Reply frame received for 3\nI0204 12:53:41.437358 1301 log.go:181] (0xc00003ac60) (0xc00070f040) Create stream\nI0204 12:53:41.437376 1301 log.go:181] (0xc00003ac60) (0xc00070f040) Stream added, broadcasting: 5\nI0204 12:53:41.438305 1301 log.go:181] (0xc00003ac60) Reply frame received for 5\nI0204 12:53:41.504938 1301 log.go:181] (0xc00003ac60) Data frame received for 5\nI0204 12:53:41.504974 1301 log.go:181] (0xc00070f040) (5) Data frame handling\nI0204 12:53:41.504984 1301 log.go:181] (0xc00070f040) (5) Data frame sent\nI0204 12:53:41.504991 1301 log.go:181] (0xc00003ac60) Data frame received for 5\nI0204 12:53:41.504996 1301 log.go:181] (0xc00070f040) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0204 12:53:41.505045 1301 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:41.505058 1301 log.go:181] (0xc00070e8c0) (3) Data frame handling\nI0204 12:53:41.506704 1301 log.go:181] (0xc00003ac60) Data frame received for 1\nI0204 12:53:41.506721 1301 log.go:181] (0xc000ab41e0) (1) Data frame handling\nI0204 12:53:41.506732 1301 log.go:181] (0xc000ab41e0) (1) Data frame sent\nI0204 12:53:41.506742 1301 log.go:181] (0xc00003ac60) (0xc000ab41e0) Stream removed, broadcasting: 1\nI0204 12:53:41.506755 1301 log.go:181] (0xc00003ac60) Go away received\nI0204 12:53:41.507171 1301 log.go:181] (0xc00003ac60) (0xc000ab41e0) Stream removed, broadcasting: 1\nI0204 12:53:41.507183 1301 log.go:181] (0xc00003ac60) (0xc00070e8c0) Stream removed, broadcasting: 3\nI0204 12:53:41.507189 1301 log.go:181] (0xc00003ac60) (0xc00070f040) Stream removed, broadcasting: 5\n" Feb 4 12:53:41.512: INFO: stdout: "" Feb 4 12:53:41.513: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-9649 exec execpod-affinitysmlgm -- /bin/sh -x -c nc -zv -t -w 2 10.96.89.63 80' Feb 4 12:53:41.716: INFO: stderr: "I0204 12:53:41.655327 1319 log.go:181] (0xc00003aa50) (0xc0009d8500) Create stream\nI0204 12:53:41.655374 1319 log.go:181] (0xc00003aa50) (0xc0009d8500) Stream added, broadcasting: 1\nI0204 12:53:41.657610 1319 log.go:181] (0xc00003aa50) Reply frame received for 1\nI0204 12:53:41.657655 1319 log.go:181] (0xc00003aa50) (0xc0009d85a0) Create stream\nI0204 12:53:41.657673 1319 log.go:181] (0xc00003aa50) (0xc0009d85a0) Stream added, broadcasting: 3\nI0204 12:53:41.659243 1319 log.go:181] (0xc00003aa50) Reply frame received for 3\nI0204 12:53:41.659288 1319 log.go:181] (0xc00003aa50) (0xc000952dc0) Create stream\nI0204 12:53:41.659303 1319 log.go:181] (0xc00003aa50) (0xc000952dc0) Stream added, broadcasting: 5\nI0204 12:53:41.660001 1319 
log.go:181] (0xc00003aa50) Reply frame received for 5\nI0204 12:53:41.710891 1319 log.go:181] (0xc00003aa50) Data frame received for 5\nI0204 12:53:41.710920 1319 log.go:181] (0xc000952dc0) (5) Data frame handling\nI0204 12:53:41.710946 1319 log.go:181] (0xc000952dc0) (5) Data frame sent\nI0204 12:53:41.710959 1319 log.go:181] (0xc00003aa50) Data frame received for 5\n+ nc -zv -t -w 2 10.96.89.63 80\nConnection to 10.96.89.63 80 port [tcp/http] succeeded!\nI0204 12:53:41.710978 1319 log.go:181] (0xc000952dc0) (5) Data frame handling\nI0204 12:53:41.711027 1319 log.go:181] (0xc00003aa50) Data frame received for 3\nI0204 12:53:41.711057 1319 log.go:181] (0xc0009d85a0) (3) Data frame handling\nI0204 12:53:41.712355 1319 log.go:181] (0xc00003aa50) Data frame received for 1\nI0204 12:53:41.712371 1319 log.go:181] (0xc0009d8500) (1) Data frame handling\nI0204 12:53:41.712382 1319 log.go:181] (0xc0009d8500) (1) Data frame sent\nI0204 12:53:41.712443 1319 log.go:181] (0xc00003aa50) (0xc0009d8500) Stream removed, broadcasting: 1\nI0204 12:53:41.712506 1319 log.go:181] (0xc00003aa50) Go away received\nI0204 12:53:41.712745 1319 log.go:181] (0xc00003aa50) (0xc0009d8500) Stream removed, broadcasting: 1\nI0204 12:53:41.712758 1319 log.go:181] (0xc00003aa50) (0xc0009d85a0) Stream removed, broadcasting: 3\nI0204 12:53:41.712770 1319 log.go:181] (0xc00003aa50) (0xc000952dc0) Stream removed, broadcasting: 5\n" Feb 4 12:53:41.716: INFO: stdout: "" Feb 4 12:53:41.716: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-9649 exec execpod-affinitysmlgm -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 32055' Feb 4 12:53:42.001: INFO: stderr: "I0204 12:53:41.926719 1337 log.go:181] (0xc00003a420) (0xc0000cbe00) Create stream\nI0204 12:53:41.926907 1337 log.go:181] (0xc00003a420) (0xc0000cbe00) Stream added, broadcasting: 1\nI0204 12:53:41.928932 1337 log.go:181] (0xc00003a420) Reply frame received for 1\nI0204 12:53:41.928979 1337 log.go:181] (0xc00003a420) (0xc0007b43c0) Create stream\nI0204 12:53:41.928991 1337 log.go:181] (0xc00003a420) (0xc0007b43c0) Stream added, broadcasting: 3\nI0204 12:53:41.929652 1337 log.go:181] (0xc00003a420) Reply frame received for 3\nI0204 12:53:41.929678 1337 log.go:181] (0xc00003a420) (0xc000a768c0) Create stream\nI0204 12:53:41.929685 1337 log.go:181] (0xc00003a420) (0xc000a768c0) Stream added, broadcasting: 5\nI0204 12:53:41.930393 1337 log.go:181] (0xc00003a420) Reply frame received for 5\nI0204 12:53:41.994438 1337 log.go:181] (0xc00003a420) Data frame received for 5\nI0204 12:53:41.994475 1337 log.go:181] (0xc000a768c0) (5) Data frame handling\nI0204 12:53:41.994488 1337 log.go:181] (0xc000a768c0) (5) Data frame sent\nI0204 12:53:41.994495 1337 log.go:181] (0xc00003a420) Data frame received for 5\nI0204 12:53:41.994501 1337 log.go:181] (0xc000a768c0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 32055\nConnection to 172.18.0.14 32055 port [tcp/*] succeeded!\nI0204 12:53:41.994558 1337 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 12:53:41.994574 1337 log.go:181] (0xc0007b43c0) (3) Data frame handling\nI0204 12:53:41.995672 1337 log.go:181] (0xc00003a420) Data frame received for 1\nI0204 12:53:41.995713 1337 log.go:181] (0xc0000cbe00) (1) Data frame handling\nI0204 12:53:41.995735 1337 log.go:181] (0xc0000cbe00) (1) Data frame sent\nI0204 12:53:41.995755 1337 log.go:181] (0xc00003a420) (0xc0000cbe00) Stream removed, broadcasting: 1\nI0204 12:53:41.995781 1337 log.go:181] 
(0xc00003a420) Go away received\nI0204 12:53:41.996215 1337 log.go:181] (0xc00003a420) (0xc0000cbe00) Stream removed, broadcasting: 1\nI0204 12:53:41.996234 1337 log.go:181] (0xc00003a420) (0xc0007b43c0) Stream removed, broadcasting: 3\nI0204 12:53:41.996243 1337 log.go:181] (0xc00003a420) (0xc000a768c0) Stream removed, broadcasting: 5\n" Feb 4 12:53:42.001: INFO: stdout: "" Feb 4 12:53:42.001: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-9649 exec execpod-affinitysmlgm -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.16 32055' Feb 4 12:53:42.488: INFO: stderr: "I0204 12:53:42.429291 1350 log.go:181] (0xc000141080) (0xc000a203c0) Create stream\nI0204 12:53:42.429359 1350 log.go:181] (0xc000141080) (0xc000a203c0) Stream added, broadcasting: 1\nI0204 12:53:42.431266 1350 log.go:181] (0xc000141080) Reply frame received for 1\nI0204 12:53:42.431300 1350 log.go:181] (0xc000141080) (0xc000c9a000) Create stream\nI0204 12:53:42.431309 1350 log.go:181] (0xc000141080) (0xc000c9a000) Stream added, broadcasting: 3\nI0204 12:53:42.432094 1350 log.go:181] (0xc000141080) Reply frame received for 3\nI0204 12:53:42.432120 1350 log.go:181] (0xc000141080) (0xc000a20460) Create stream\nI0204 12:53:42.432128 1350 log.go:181] (0xc000141080) (0xc000a20460) Stream added, broadcasting: 5\nI0204 12:53:42.432779 1350 log.go:181] (0xc000141080) Reply frame received for 5\nI0204 12:53:42.482229 1350 log.go:181] (0xc000141080) Data frame received for 3\nI0204 12:53:42.482254 1350 log.go:181] (0xc000c9a000) (3) Data frame handling\nI0204 12:53:42.482287 1350 log.go:181] (0xc000141080) Data frame received for 5\nI0204 12:53:42.482308 1350 log.go:181] (0xc000a20460) (5) Data frame handling\nI0204 12:53:42.482325 1350 log.go:181] (0xc000a20460) (5) Data frame sent\nI0204 12:53:42.482336 1350 log.go:181] (0xc000141080) Data frame received for 5\nI0204 12:53:42.482346 1350 log.go:181] (0xc000a20460) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.16 32055\nConnection to 172.18.0.16 32055 port [tcp/*] succeeded!\nI0204 12:53:42.483193 1350 log.go:181] (0xc000141080) Data frame received for 1\nI0204 12:53:42.483218 1350 log.go:181] (0xc000a203c0) (1) Data frame handling\nI0204 12:53:42.483229 1350 log.go:181] (0xc000a203c0) (1) Data frame sent\nI0204 12:53:42.483237 1350 log.go:181] (0xc000141080) (0xc000a203c0) Stream removed, broadcasting: 1\nI0204 12:53:42.483249 1350 log.go:181] (0xc000141080) Go away received\nI0204 12:53:42.483571 1350 log.go:181] (0xc000141080) (0xc000a203c0) Stream removed, broadcasting: 1\nI0204 12:53:42.483591 1350 log.go:181] (0xc000141080) (0xc000c9a000) Stream removed, broadcasting: 3\nI0204 12:53:42.483605 1350 log.go:181] (0xc000141080) (0xc000a20460) Stream removed, broadcasting: 5\n" Feb 4 12:53:42.488: INFO: stdout: "" Feb 4 12:53:42.488: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-9649 exec execpod-affinitysmlgm -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.14:32055/ ; done' Feb 4 12:53:43.563: INFO: stderr: "I0204 12:53:43.411710 1368 log.go:181] (0xc00003ac60) (0xc0009663c0) Create stream\nI0204 12:53:43.411788 1368 log.go:181] (0xc00003ac60) (0xc0009663c0) Stream added, broadcasting: 1\nI0204 12:53:43.413976 1368 log.go:181] (0xc00003ac60) Reply frame received for 1\nI0204 12:53:43.414013 1368 log.go:181] (0xc00003ac60) (0xc00092c000) Create stream\nI0204 12:53:43.414030 1368 
log.go:181] (0xc00003ac60) (0xc00092c000) Stream added, broadcasting: 3\nI0204 12:53:43.414748 1368 log.go:181] (0xc00003ac60) Reply frame received for 3\nI0204 12:53:43.414795 1368 log.go:181] (0xc00003ac60) (0xc000936780) Create stream\nI0204 12:53:43.414828 1368 log.go:181] (0xc00003ac60) (0xc000936780) Stream added, broadcasting: 5\nI0204 12:53:43.415469 1368 log.go:181] (0xc00003ac60) Reply frame received for 5\nI0204 12:53:43.463319 1368 log.go:181] (0xc00003ac60) Data frame received for 5\nI0204 12:53:43.463384 1368 log.go:181] (0xc000936780) (5) Data frame handling\nI0204 12:53:43.463420 1368 log.go:181] (0xc000936780) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32055/\nI0204 12:53:43.463475 1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.463509 1368 log.go:181] (0xc00092c000) (3) Data frame handling\nI0204 12:53:43.463530 1368 log.go:181] (0xc00092c000) (3) Data frame sent\nI0204 12:53:43.468195 1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.468224 1368 log.go:181] (0xc00092c000) (3) Data frame handling\nI0204 12:53:43.468251 1368 log.go:181] (0xc00092c000) (3) Data frame sent\nI0204 12:53:43.469358 1368 log.go:181] (0xc00003ac60) Data frame received for 5\nI0204 12:53:43.469373 1368 log.go:181] (0xc000936780) (5) Data frame handling\nI0204 12:53:43.469387 1368 log.go:181] (0xc000936780) (5) Data frame sent\nI0204 12:53:43.469392 1368 log.go:181] (0xc00003ac60) Data frame received for 5\nI0204 12:53:43.469396 1368 log.go:181] (0xc000936780) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32055/\nI0204 12:53:43.469419 1368 log.go:181] (0xc000936780) (5) Data frame sent\nI0204 12:53:43.473232 1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.473244 1368 log.go:181] (0xc00092c000) (3) Data frame handling\nI0204 12:53:43.473267 1368 log.go:181] (0xc00092c000) (3) Data frame sent\nI0204 12:53:43.474345 1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.474361 1368 log.go:181] (0xc00092c000) (3) Data frame handling\nI0204 12:53:43.474371 1368 log.go:181] (0xc00092c000) (3) Data frame sent\nI0204 12:53:43.474821 1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.474842 1368 log.go:181] (0xc00003ac60) Data frame received for 5\nI0204 12:53:43.474867 1368 log.go:181] (0xc000936780) (5) Data frame handling\nI0204 12:53:43.474885 1368 log.go:181] (0xc000936780) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32055/\nI0204 12:53:43.474899 1368 log.go:181] (0xc00092c000) (3) Data frame handling\nI0204 12:53:43.474907 1368 log.go:181] (0xc00092c000) (3) Data frame sent\nI0204 12:53:43.480001 1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.480013 1368 log.go:181] (0xc00092c000) (3) Data frame handling\nI0204 12:53:43.480020 1368 log.go:181] (0xc00092c000) (3) Data frame sent\nI0204 12:53:43.480945 1368 log.go:181] (0xc00003ac60) Data frame received for 5\nI0204 12:53:43.480973 1368 log.go:181] (0xc000936780) (5) Data frame handling\n+ echo\n+ curl -q -sI0204 12:53:43.481057 1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.481101 1368 log.go:181] (0xc00092c000) (3) Data frame handling\nI0204 12:53:43.481125 1368 log.go:181] (0xc00092c000) (3) Data frame sent\nI0204 12:53:43.481155 1368 log.go:181] (0xc000936780) (5) Data frame sent\nI0204 12:53:43.481173 1368 log.go:181] (0xc00003ac60) Data frame received for 
5\nI0204 12:53:43.481192 1368 log.go:181] (0xc000936780) (5) Data frame handling\nI0204 12:53:43.481216 1368 log.go:181] (0xc000936780) (5) Data frame sent\n --connect-timeout 2 http://172.18.0.14:32055/\nI0204 12:53:43.484053 1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.484065 1368 log.go:181] (0xc00092c000) (3) Data frame handling\nI0204 12:53:43.484072 1368 log.go:181] (0xc00092c000) (3) Data frame sent\nI0204 12:53:43.484917 1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.484941 1368 log.go:181] (0xc00092c000) (3) Data frame handling\nI0204 12:53:43.484950 1368 log.go:181] (0xc00092c000) (3) Data frame sent\nI0204 12:53:43.484969 1368 log.go:181] (0xc00003ac60) Data frame received for 5\nI0204 12:53:43.484982 1368 log.go:181] (0xc000936780) (5) Data frame handling\nI0204 12:53:43.484993 1368 log.go:181] (0xc000936780) (5) Data frame sent\nI0204 12:53:43.485001 1368 log.go:181] (0xc00003ac60) Data frame received for 5\nI0204 12:53:43.485007 1368 log.go:181] (0xc000936780) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32055/\nI0204 12:53:43.485020 1368 log.go:181] (0xc000936780) (5) Data frame sent\nI0204 12:53:43.487962 1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.487981 1368 log.go:181] (0xc00092c000) (3) Data frame handling\nI0204 12:53:43.487998 1368 log.go:181] (0xc00092c000) (3) Data frame sent\nI0204 12:53:43.489111 1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.489149 1368 log.go:181] (0xc00092c000) (3) Data frame handling\nI0204 12:53:43.489175 1368 log.go:181] (0xc00092c000) (3) Data frame sent\nI0204 12:53:43.489207 1368 log.go:181] (0xc00003ac60) Data frame received for 5\nI0204 12:53:43.489218 1368 log.go:181] (0xc000936780) (5) Data frame handling\nI0204 12:53:43.489237 1368 log.go:181] (0xc000936780) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32055/\nI0204 12:53:43.492590 1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.492601 1368 log.go:181] (0xc00092c000) (3) Data frame handling\nI0204 12:53:43.492610 1368 log.go:181] (0xc00092c000) (3) Data frame sent\nI0204 12:53:43.493552 1368 log.go:181] (0xc00003ac60) Data frame received for 5\nI0204 12:53:43.493577 1368 log.go:181] (0xc000936780) (5) Data frame handling\nI0204 12:53:43.493597 1368 log.go:181] (0xc000936780) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32055/\nI0204 12:53:43.493647 1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.493672 1368 log.go:181] (0xc00092c000) (3) Data frame handling\nI0204 12:53:43.493698 1368 log.go:181] (0xc00092c000) (3) Data frame sent\nI0204 12:53:43.496551 1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.496566 1368 log.go:181] (0xc00092c000) (3) Data frame handling\nI0204 12:53:43.496575 1368 log.go:181] (0xc00092c000) (3) Data frame sent\nI0204 12:53:43.497622 1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.497641 1368 log.go:181] (0xc00092c000) (3) Data frame handling\nI0204 12:53:43.497651 1368 log.go:181] (0xc00092c000) (3) Data frame sent\nI0204 12:53:43.497667 1368 log.go:181] (0xc00003ac60) Data frame received for 5\nI0204 12:53:43.497676 1368 log.go:181] (0xc000936780) (5) Data frame handling\nI0204 12:53:43.497703 1368 log.go:181] (0xc000936780) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32055/\nI0204 12:53:43.500394 
1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.500412 1368 log.go:181] (0xc00092c000) (3) Data frame handling\nI0204 12:53:43.500427 1368 log.go:181] (0xc00092c000) (3) Data frame sent\nI0204 12:53:43.500763 1368 log.go:181] (0xc00003ac60) Data frame received for 5\nI0204 12:53:43.500808 1368 log.go:181] (0xc000936780) (5) Data frame handling\nI0204 12:53:43.500827 1368 log.go:181] (0xc000936780) (5) Data frame sent\nI0204 12:53:43.500923 1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.500935 1368 log.go:181] (0xc00092c000) (3) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32055/\nI0204 12:53:43.500944 1368 log.go:181] (0xc00092c000) (3) Data frame sent\nI0204 12:53:43.506127 1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.506150 1368 log.go:181] (0xc00092c000) (3) Data frame handling\nI0204 12:53:43.506165 1368 log.go:181] (0xc00092c000) (3) Data frame sent\nI0204 12:53:43.506763 1368 log.go:181] (0xc00003ac60) Data frame received for 5\nI0204 12:53:43.506774 1368 log.go:181] (0xc000936780) (5) Data frame handling\nI0204 12:53:43.506783 1368 log.go:181] (0xc000936780) (5) Data frame sent\nI0204 12:53:43.506791 1368 log.go:181] (0xc00003ac60) Data frame received for 5\nI0204 12:53:43.506795 1368 log.go:181] (0xc000936780) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32055/\nI0204 12:53:43.506807 1368 log.go:181] (0xc000936780) (5) Data frame sent\nI0204 12:53:43.506871 1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.506899 1368 log.go:181] (0xc00092c000) (3) Data frame handling\nI0204 12:53:43.506928 1368 log.go:181] (0xc00092c000) (3) Data frame sent\nI0204 12:53:43.512059 1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.512086 1368 log.go:181] (0xc00092c000) (3) Data frame handling\nI0204 12:53:43.512114 1368 log.go:181] (0xc00092c000) (3) Data frame sent\nI0204 12:53:43.513041 1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.513075 1368 log.go:181] (0xc00092c000) (3) Data frame handling\nI0204 12:53:43.513090 1368 log.go:181] (0xc00092c000) (3) Data frame sent\nI0204 12:53:43.513109 1368 log.go:181] (0xc00003ac60) Data frame received for 5\nI0204 12:53:43.513122 1368 log.go:181] (0xc000936780) (5) Data frame handling\nI0204 12:53:43.513137 1368 log.go:181] (0xc000936780) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32055/\nI0204 12:53:43.517266 1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.517292 1368 log.go:181] (0xc00092c000) (3) Data frame handling\nI0204 12:53:43.517314 1368 log.go:181] (0xc00092c000) (3) Data frame sent\nI0204 12:53:43.518337 1368 log.go:181] (0xc00003ac60) Data frame received for 5\nI0204 12:53:43.518359 1368 log.go:181] (0xc000936780) (5) Data frame handling\nI0204 12:53:43.518370 1368 log.go:181] (0xc000936780) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32055/\nI0204 12:53:43.518387 1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.518396 1368 log.go:181] (0xc00092c000) (3) Data frame handling\nI0204 12:53:43.518406 1368 log.go:181] (0xc00092c000) (3) Data frame sent\nI0204 12:53:43.522115 1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.522145 1368 log.go:181] (0xc00092c000) (3) Data frame handling\nI0204 12:53:43.522163 1368 log.go:181] (0xc00092c000) (3) Data frame sent\nI0204 
12:53:43.522972 1368 log.go:181] (0xc00003ac60) Data frame received for 5\nI0204 12:53:43.522989 1368 log.go:181] (0xc000936780) (5) Data frame handling\nI0204 12:53:43.522997 1368 log.go:181] (0xc000936780) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32055/\nI0204 12:53:43.523024 1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.523051 1368 log.go:181] (0xc00092c000) (3) Data frame handling\nI0204 12:53:43.523080 1368 log.go:181] (0xc00092c000) (3) Data frame sent\nI0204 12:53:43.528708 1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.528728 1368 log.go:181] (0xc00092c000) (3) Data frame handling\nI0204 12:53:43.528744 1368 log.go:181] (0xc00092c000) (3) Data frame sent\nI0204 12:53:43.529883 1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.529911 1368 log.go:181] (0xc00092c000) (3) Data frame handling\nI0204 12:53:43.529919 1368 log.go:181] (0xc00092c000) (3) Data frame sent\nI0204 12:53:43.529950 1368 log.go:181] (0xc00003ac60) Data frame received for 5\nI0204 12:53:43.529984 1368 log.go:181] (0xc000936780) (5) Data frame handling\nI0204 12:53:43.530008 1368 log.go:181] (0xc000936780) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32055/\nI0204 12:53:43.538072 1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.538099 1368 log.go:181] (0xc00092c000) (3) Data frame handling\nI0204 12:53:43.538111 1368 log.go:181] (0xc00092c000) (3) Data frame sent\nI0204 12:53:43.538624 1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.538641 1368 log.go:181] (0xc00092c000) (3) Data frame handling\nI0204 12:53:43.538695 1368 log.go:181] (0xc00092c000) (3) Data frame sent\nI0204 12:53:43.539358 1368 log.go:181] (0xc00003ac60) Data frame received for 5\nI0204 12:53:43.539374 1368 log.go:181] (0xc000936780) (5) Data frame handling\nI0204 12:53:43.539384 1368 log.go:181] (0xc000936780) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32055/\nI0204 12:53:43.553411 1368 log.go:181] (0xc00003ac60) Data frame received for 5\nI0204 12:53:43.553543 1368 log.go:181] (0xc000936780) (5) Data frame handling\nI0204 12:53:43.553605 1368 log.go:181] (0xc000936780) (5) Data frame sent\nI0204 12:53:43.553663 1368 log.go:181] (0xc00003ac60) Data frame received for 5\nI0204 12:53:43.553731 1368 log.go:181] (0xc000936780) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32055/\nI0204 12:53:43.553892 1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.553954 1368 log.go:181] (0xc00092c000) (3) Data frame handling\nI0204 12:53:43.554014 1368 log.go:181] (0xc00092c000) (3) Data frame sent\nI0204 12:53:43.554070 1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.554131 1368 log.go:181] (0xc00092c000) (3) Data frame handling\nI0204 12:53:43.554191 1368 log.go:181] (0xc00092c000) (3) Data frame sent\nI0204 12:53:43.554233 1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.554306 1368 log.go:181] (0xc00092c000) (3) Data frame handling\nI0204 12:53:43.554383 1368 log.go:181] (0xc00092c000) (3) Data frame sent\nI0204 12:53:43.554402 1368 log.go:181] (0xc00003ac60) Data frame received for 3\nI0204 12:53:43.554411 1368 log.go:181] (0xc00092c000) (3) Data frame handling\nI0204 12:53:43.556255 1368 log.go:181] (0xc00003ac60) Data frame received for 1\nI0204 12:53:43.556280 1368 log.go:181] (0xc0009663c0) (1) Data 
frame handling\nI0204 12:53:43.556293 1368 log.go:181] (0xc0009663c0) (1) Data frame sent\nI0204 12:53:43.556614 1368 log.go:181] (0xc00003ac60) (0xc0009663c0) Stream removed, broadcasting: 1\nI0204 12:53:43.556632 1368 log.go:181] (0xc00003ac60) Go away received\nI0204 12:53:43.557027 1368 log.go:181] (0xc00003ac60) (0xc0009663c0) Stream removed, broadcasting: 1\nI0204 12:53:43.557043 1368 log.go:181] (0xc00003ac60) (0xc00092c000) Stream removed, broadcasting: 3\nI0204 12:53:43.557049 1368 log.go:181] (0xc00003ac60) (0xc000936780) Stream removed, broadcasting: 5\n" Feb 4 12:53:43.564: INFO: stdout: "\naffinity-nodeport-cs5l2\naffinity-nodeport-cs5l2\naffinity-nodeport-cs5l2\naffinity-nodeport-cs5l2\naffinity-nodeport-cs5l2\naffinity-nodeport-cs5l2\naffinity-nodeport-cs5l2\naffinity-nodeport-cs5l2\naffinity-nodeport-cs5l2\naffinity-nodeport-cs5l2\naffinity-nodeport-cs5l2\naffinity-nodeport-cs5l2\naffinity-nodeport-cs5l2\naffinity-nodeport-cs5l2\naffinity-nodeport-cs5l2\naffinity-nodeport-cs5l2" Feb 4 12:53:43.564: INFO: Received response from host: affinity-nodeport-cs5l2 Feb 4 12:53:43.564: INFO: Received response from host: affinity-nodeport-cs5l2 Feb 4 12:53:43.564: INFO: Received response from host: affinity-nodeport-cs5l2 Feb 4 12:53:43.564: INFO: Received response from host: affinity-nodeport-cs5l2 Feb 4 12:53:43.564: INFO: Received response from host: affinity-nodeport-cs5l2 Feb 4 12:53:43.564: INFO: Received response from host: affinity-nodeport-cs5l2 Feb 4 12:53:43.564: INFO: Received response from host: affinity-nodeport-cs5l2 Feb 4 12:53:43.564: INFO: Received response from host: affinity-nodeport-cs5l2 Feb 4 12:53:43.564: INFO: Received response from host: affinity-nodeport-cs5l2 Feb 4 12:53:43.564: INFO: Received response from host: affinity-nodeport-cs5l2 Feb 4 12:53:43.564: INFO: Received response from host: affinity-nodeport-cs5l2 Feb 4 12:53:43.564: INFO: Received response from host: affinity-nodeport-cs5l2 Feb 4 12:53:43.564: INFO: Received response from host: affinity-nodeport-cs5l2 Feb 4 12:53:43.564: INFO: Received response from host: affinity-nodeport-cs5l2 Feb 4 12:53:43.564: INFO: Received response from host: affinity-nodeport-cs5l2 Feb 4 12:53:43.564: INFO: Received response from host: affinity-nodeport-cs5l2 Feb 4 12:53:43.564: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-9649, will wait for the garbage collector to delete the pods Feb 4 12:53:44.961: INFO: Deleting ReplicationController affinity-nodeport took: 157.042595ms Feb 4 12:53:46.561: INFO: Terminating ReplicationController affinity-nodeport pods took: 1.600287769s [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:54:41.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9649" for this suite. 
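For context on the sixteen identical hostnames above: session affinity is a field on the Service itself, and with sessionAffinity set to ClientIP, kube-proxy pins every request from one client to a single backend pod. A minimal Go sketch of that Service shape using the k8s.io/api types (the labels and ports here are assumptions, not values from this run):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// affinityService sketches a NodePort Service with ClientIP affinity; this is
// why every curl in the loop above answered with the same pod name.
func affinityService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport"}, // name from the log
		Spec: corev1.ServiceSpec{
			Type:            corev1.ServiceTypeNodePort,
			SessionAffinity: corev1.ServiceAffinityClientIP,
			Selector:        map[string]string{"name": "affinity-nodeport"}, // assumed selector
			Ports: []corev1.ServicePort{{
				Port:       80,                   // assumed service port
				TargetPort: intstr.FromInt(8080), // assumed container port
			}},
		},
	}
}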
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:744 • [SLOW TEST:89.513 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":311,"completed":67,"skipped":1229,"failed":0} SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:54:41.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 12:54:41.229: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Feb 4 12:54:41.266: INFO: Number of nodes with available pods: 0 Feb 4 12:54:41.266: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Feb 4 12:54:41.328: INFO: Number of nodes with available pods: 0 Feb 4 12:54:41.328: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:54:42.376: INFO: Number of nodes with available pods: 0 Feb 4 12:54:42.376: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:54:43.332: INFO: Number of nodes with available pods: 0 Feb 4 12:54:43.332: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:54:44.372: INFO: Number of nodes with available pods: 0 Feb 4 12:54:44.372: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:54:45.473: INFO: Number of nodes with available pods: 1 Feb 4 12:54:45.473: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Feb 4 12:54:45.646: INFO: Number of nodes with available pods: 1 Feb 4 12:54:45.646: INFO: Number of running nodes: 0, number of available pods: 1 Feb 4 12:54:46.653: INFO: Number of nodes with available pods: 0 Feb 4 12:54:46.653: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Feb 4 12:54:46.912: INFO: Number of nodes with available pods: 0 Feb 4 12:54:46.912: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:54:47.917: INFO: Number of nodes with available pods: 0 Feb 4 12:54:47.917: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:54:48.914: INFO: Number of nodes with available pods: 0 Feb 4 12:54:48.914: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:54:49.915: INFO: Number of nodes with available pods: 0 Feb 4 12:54:49.915: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:54:50.914: INFO: Number of nodes with available pods: 0 Feb 4 12:54:50.914: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:54:51.917: INFO: Number of nodes with available pods: 0 Feb 4 12:54:51.917: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:54:52.916: INFO: Number of nodes with available pods: 0 Feb 4 12:54:52.916: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:54:53.915: INFO: Number of nodes with available pods: 0 Feb 4 12:54:53.915: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:54:54.963: INFO: Number of nodes with available pods: 0 Feb 4 12:54:54.963: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:54:55.916: INFO: Number of nodes with available pods: 0 Feb 4 12:54:55.916: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:54:56.963: INFO: Number of nodes with available pods: 0 Feb 4 12:54:56.963: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:54:58.068: INFO: Number of nodes with available pods: 0 Feb 4 12:54:58.068: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:54:59.114: INFO: Number of nodes with available pods: 0 Feb 4 12:54:59.114: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:00.048: INFO: Number of nodes with available pods: 0 Feb 4 12:55:00.048: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:01.248: INFO: Number of nodes with available pods: 0 Feb 4 12:55:01.248: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:01.985: INFO: Number of nodes with available pods: 0 Feb 4 12:55:01.985: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:03.305: 
INFO: Number of nodes with available pods: 0 Feb 4 12:55:03.305: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:04.137: INFO: Number of nodes with available pods: 0 Feb 4 12:55:04.137: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:04.949: INFO: Number of nodes with available pods: 0 Feb 4 12:55:04.949: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:06.031: INFO: Number of nodes with available pods: 0 Feb 4 12:55:06.031: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:06.937: INFO: Number of nodes with available pods: 0 Feb 4 12:55:06.938: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:08.185: INFO: Number of nodes with available pods: 0 Feb 4 12:55:08.185: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:09.035: INFO: Number of nodes with available pods: 0 Feb 4 12:55:09.035: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:09.936: INFO: Number of nodes with available pods: 0 Feb 4 12:55:09.936: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:10.916: INFO: Number of nodes with available pods: 0 Feb 4 12:55:10.917: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:11.916: INFO: Number of nodes with available pods: 0 Feb 4 12:55:11.916: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:13.186: INFO: Number of nodes with available pods: 0 Feb 4 12:55:13.186: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:13.927: INFO: Number of nodes with available pods: 0 Feb 4 12:55:13.927: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:14.917: INFO: Number of nodes with available pods: 0 Feb 4 12:55:14.917: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:15.916: INFO: Number of nodes with available pods: 0 Feb 4 12:55:15.916: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:16.916: INFO: Number of nodes with available pods: 0 Feb 4 12:55:16.916: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:17.933: INFO: Number of nodes with available pods: 0 Feb 4 12:55:17.933: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:18.916: INFO: Number of nodes with available pods: 0 Feb 4 12:55:18.916: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:19.924: INFO: Number of nodes with available pods: 0 Feb 4 12:55:19.924: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:20.983: INFO: Number of nodes with available pods: 0 Feb 4 12:55:20.983: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:21.928: INFO: Number of nodes with available pods: 0 Feb 4 12:55:21.928: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:22.915: INFO: Number of nodes with available pods: 0 Feb 4 12:55:22.915: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:23.925: INFO: Number of nodes with available pods: 0 Feb 4 12:55:23.925: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:25.077: INFO: Number of nodes with available pods: 0 Feb 4 12:55:25.077: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:26.048: INFO: Number of nodes with available pods: 0 Feb 4 12:55:26.048: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:26.916: INFO: Number of nodes with available pods: 0 Feb 4 
12:55:26.916: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:27.949: INFO: Number of nodes with available pods: 0 Feb 4 12:55:27.949: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:28.916: INFO: Number of nodes with available pods: 0 Feb 4 12:55:28.916: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:29.968: INFO: Number of nodes with available pods: 0 Feb 4 12:55:29.968: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:30.915: INFO: Number of nodes with available pods: 0 Feb 4 12:55:30.915: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:31.941: INFO: Number of nodes with available pods: 0 Feb 4 12:55:31.941: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:32.915: INFO: Number of nodes with available pods: 0 Feb 4 12:55:32.916: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:33.916: INFO: Number of nodes with available pods: 0 Feb 4 12:55:33.916: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:34.917: INFO: Number of nodes with available pods: 0 Feb 4 12:55:34.917: INFO: Node latest-worker is running more than one daemon pod Feb 4 12:55:35.916: INFO: Number of nodes with available pods: 1 Feb 4 12:55:35.916: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5130, will wait for the garbage collector to delete the pods Feb 4 12:55:35.983: INFO: Deleting DaemonSet.extensions daemon-set took: 7.723698ms Feb 4 12:55:36.583: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.306163ms Feb 4 12:56:31.287: INFO: Number of nodes with available pods: 0 Feb 4 12:56:31.287: INFO: Number of running nodes: 0, number of available pods: 0 Feb 4 12:56:31.290: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"2078639"},"items":null} Feb 4 12:56:31.293: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2078639"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:56:31.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5130" for this suite. 
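The long polling run above is the test waiting for the scheduler to react to label flips: a DaemonSet only places pods on nodes whose labels match its pod template's nodeSelector, so relabeling a node from blue to green drains it and repopulates it. A rough sketch of the two fields being manipulated (all names, labels, and the image are assumed):

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// complexDaemonSet sketches the knobs this test turns: a pod-template
// nodeSelector (only nodes labeled color=green run the daemon) and a
// RollingUpdate strategy, matching the "change its update strategy" step.
func complexDaemonSet() *appsv1.DaemonSet {
	labels := map[string]string{"daemonset-name": "daemon-set"} // assumed labels
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					NodeSelector: map[string]string{"color": "green"}, // the label the test toggles
					Containers:   []corev1.Container{{Name: "app", Image: "k8s.gcr.io/pause:3.2"}}, // assumed image
				},
			},
		},
	}
}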
• [SLOW TEST:110.270 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":311,"completed":68,"skipped":1235,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:56:31.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name configmap-test-upd-6672d9bc-1c34-4696-9eaa-ea706a6e0aa9 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-6672d9bc-1c34-4696-9eaa-ea706a6e0aa9 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:56:37.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1545" for this suite. 
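The ConfigMap test that follows relies on the kubelet's volume sync: ConfigMap data mounted as a volume is rewritten in place (eventually) when the object changes, with no pod restart, so "waiting to observe update in volume" is just polling file contents. A minimal sketch of the volume wiring, with assumed pod and mount names (the agnhost image appears elsewhere in this log):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// configMapVolumePod sketches a pod mounting a ConfigMap as a volume; when the
// ConfigMap object is updated, the kubelet rewrites the mounted files, which
// is the update the test waits to observe.
func configMapVolumePod(cmName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmap-upd"}, // assumed name
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client",
				Image:        "k8s.gcr.io/e2e-test-images/agnhost:2.26",
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
}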
• [SLOW TEST:6.294 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":311,"completed":69,"skipped":1245,"failed":0} [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:56:37.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating pod pod-subpath-test-secret-vv4l STEP: Creating a pod to test atomic-volume-subpath Feb 4 12:56:37.811: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-vv4l" in namespace "subpath-4044" to be "Succeeded or Failed" Feb 4 12:56:37.827: INFO: Pod "pod-subpath-test-secret-vv4l": Phase="Pending", Reason="", readiness=false. Elapsed: 15.376924ms Feb 4 12:56:39.831: INFO: Pod "pod-subpath-test-secret-vv4l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019135253s Feb 4 12:56:41.835: INFO: Pod "pod-subpath-test-secret-vv4l": Phase="Running", Reason="", readiness=true. Elapsed: 4.023207106s Feb 4 12:56:44.007: INFO: Pod "pod-subpath-test-secret-vv4l": Phase="Running", Reason="", readiness=true. Elapsed: 6.195138636s Feb 4 12:56:46.186: INFO: Pod "pod-subpath-test-secret-vv4l": Phase="Running", Reason="", readiness=true. Elapsed: 8.374482895s Feb 4 12:56:48.189: INFO: Pod "pod-subpath-test-secret-vv4l": Phase="Running", Reason="", readiness=true. Elapsed: 10.377859757s Feb 4 12:56:50.194: INFO: Pod "pod-subpath-test-secret-vv4l": Phase="Running", Reason="", readiness=true. Elapsed: 12.382516419s Feb 4 12:56:52.199: INFO: Pod "pod-subpath-test-secret-vv4l": Phase="Running", Reason="", readiness=true. Elapsed: 14.387634346s Feb 4 12:56:54.204: INFO: Pod "pod-subpath-test-secret-vv4l": Phase="Running", Reason="", readiness=true. Elapsed: 16.392698631s Feb 4 12:56:56.208: INFO: Pod "pod-subpath-test-secret-vv4l": Phase="Running", Reason="", readiness=true. Elapsed: 18.396772243s Feb 4 12:56:58.225: INFO: Pod "pod-subpath-test-secret-vv4l": Phase="Running", Reason="", readiness=true. Elapsed: 20.413595317s Feb 4 12:57:00.229: INFO: Pod "pod-subpath-test-secret-vv4l": Phase="Running", Reason="", readiness=true. Elapsed: 22.417920354s Feb 4 12:57:02.235: INFO: Pod "pod-subpath-test-secret-vv4l": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.423271272s STEP: Saw pod success Feb 4 12:57:02.235: INFO: Pod "pod-subpath-test-secret-vv4l" satisfied condition "Succeeded or Failed" Feb 4 12:57:02.238: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-vv4l container test-container-subpath-secret-vv4l: STEP: delete the pod Feb 4 12:57:02.276: INFO: Waiting for pod pod-subpath-test-secret-vv4l to disappear Feb 4 12:57:02.283: INFO: Pod pod-subpath-test-secret-vv4l no longer exists STEP: Deleting pod pod-subpath-test-secret-vv4l Feb 4 12:57:02.283: INFO: Deleting pod "pod-subpath-test-secret-vv4l" in namespace "subpath-4044" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:57:02.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4044" for this suite. • [SLOW TEST:24.664 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":311,"completed":70,"skipped":1245,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:57:02.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating pod pod-subpath-test-downwardapi-wtxn STEP: Creating a pod to test atomic-volume-subpath Feb 4 12:57:02.444: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-wtxn" in namespace "subpath-4544" to be "Succeeded or Failed" Feb 4 12:57:02.458: INFO: Pod "pod-subpath-test-downwardapi-wtxn": Phase="Pending", Reason="", readiness=false. Elapsed: 13.556709ms Feb 4 12:57:04.462: INFO: Pod "pod-subpath-test-downwardapi-wtxn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017436217s Feb 4 12:57:06.466: INFO: Pod "pod-subpath-test-downwardapi-wtxn": Phase="Running", Reason="", readiness=true. Elapsed: 4.021944169s Feb 4 12:57:08.470: INFO: Pod "pod-subpath-test-downwardapi-wtxn": Phase="Running", Reason="", readiness=true. Elapsed: 6.026034251s Feb 4 12:57:10.475: INFO: Pod "pod-subpath-test-downwardapi-wtxn": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.030307205s Feb 4 12:57:12.479: INFO: Pod "pod-subpath-test-downwardapi-wtxn": Phase="Running", Reason="", readiness=true. Elapsed: 10.034628938s Feb 4 12:57:14.564: INFO: Pod "pod-subpath-test-downwardapi-wtxn": Phase="Running", Reason="", readiness=true. Elapsed: 12.119526094s Feb 4 12:57:16.569: INFO: Pod "pod-subpath-test-downwardapi-wtxn": Phase="Running", Reason="", readiness=true. Elapsed: 14.125065293s Feb 4 12:57:19.055: INFO: Pod "pod-subpath-test-downwardapi-wtxn": Phase="Running", Reason="", readiness=true. Elapsed: 16.611111816s Feb 4 12:57:21.379: INFO: Pod "pod-subpath-test-downwardapi-wtxn": Phase="Running", Reason="", readiness=true. Elapsed: 18.934583152s Feb 4 12:57:23.426: INFO: Pod "pod-subpath-test-downwardapi-wtxn": Phase="Running", Reason="", readiness=true. Elapsed: 20.981557468s Feb 4 12:57:25.431: INFO: Pod "pod-subpath-test-downwardapi-wtxn": Phase="Running", Reason="", readiness=true. Elapsed: 22.986727626s Feb 4 12:57:27.519: INFO: Pod "pod-subpath-test-downwardapi-wtxn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.074831261s STEP: Saw pod success Feb 4 12:57:27.519: INFO: Pod "pod-subpath-test-downwardapi-wtxn" satisfied condition "Succeeded or Failed" Feb 4 12:57:27.522: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-downwardapi-wtxn container test-container-subpath-downwardapi-wtxn: STEP: delete the pod Feb 4 12:57:28.045: INFO: Waiting for pod pod-subpath-test-downwardapi-wtxn to disappear Feb 4 12:57:28.088: INFO: Pod pod-subpath-test-downwardapi-wtxn no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-wtxn Feb 4 12:57:28.088: INFO: Deleting pod "pod-subpath-test-downwardapi-wtxn" in namespace "subpath-4544" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:57:28.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4544" for this suite. 
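Both Subpath tests just completed exercise the same mount mechanism: SubPath on a VolumeMount exposes a single entry of an atomically written volume (a secret key, a downward-API file) rather than the whole directory. Sketched with assumed names and paths:

package sketch

import corev1 "k8s.io/api/core/v1"

// subPathMount sketches the VolumeMount used by the atomic-writer subpath
// tests: instead of mounting the entire secret/downwardAPI volume, only the
// single entry named by SubPath is exposed at MountPath. The test then checks
// the file survives the volume's atomic symlink-swap updates.
func subPathMount(volName string) corev1.VolumeMount {
	return corev1.VolumeMount{
		Name:      volName,
		MountPath: "/test-volume/sub", // assumed path
		SubPath:   "sub",              // assumed key within the volume
	}
}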
• [SLOW TEST:25.773 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":311,"completed":71,"skipped":1263,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:57:28.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 4 12:57:34.130: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:57:34.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5773" for this suite. 
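The "Expected: &{} to match Container's Termination Message: --" assertion above is the whole point of the test: with FallbackToLogsOnError, container logs are substituted into the termination message only when the container fails, so a successful run must report an empty message. The relevant container field, sketched with assumed name, image, and command:

package sketch

import corev1 "k8s.io/api/core/v1"

// quietContainer sketches the policy under test: with FallbackToLogsOnError,
// the kubelet fills the terminated-state Message from the container log only
// on failure; on success (this test) the termination message stays empty.
func quietContainer() corev1.Container {
	return corev1.Container{
		Name:                     "termination-message-container", // assumed name
		Image:                    "docker.io/library/busybox:1.29", // assumed image
		Command:                  []string{"/bin/true"},            // exits 0, so no message
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
}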
• [SLOW TEST:6.755 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":311,"completed":72,"skipped":1272,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:57:34.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating projection with secret that has name projected-secret-test-20327257-c9d9-46e0-aa12-e9c3e29673bd STEP: Creating a pod to test consume secrets Feb 4 12:57:36.523: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4de1829a-524e-45c1-bc48-7793d73deeec" in namespace "projected-6958" to be "Succeeded or Failed" Feb 4 12:57:36.750: INFO: Pod "pod-projected-secrets-4de1829a-524e-45c1-bc48-7793d73deeec": Phase="Pending", Reason="", readiness=false. Elapsed: 225.983536ms Feb 4 12:57:39.404: INFO: Pod "pod-projected-secrets-4de1829a-524e-45c1-bc48-7793d73deeec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.880056673s Feb 4 12:57:42.103: INFO: Pod "pod-projected-secrets-4de1829a-524e-45c1-bc48-7793d73deeec": Phase="Pending", Reason="", readiness=false. Elapsed: 5.5795674s Feb 4 12:57:44.316: INFO: Pod "pod-projected-secrets-4de1829a-524e-45c1-bc48-7793d73deeec": Phase="Pending", Reason="", readiness=false. Elapsed: 7.792796308s Feb 4 12:57:46.462: INFO: Pod "pod-projected-secrets-4de1829a-524e-45c1-bc48-7793d73deeec": Phase="Pending", Reason="", readiness=false. Elapsed: 9.938747166s Feb 4 12:57:48.618: INFO: Pod "pod-projected-secrets-4de1829a-524e-45c1-bc48-7793d73deeec": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.094018441s STEP: Saw pod success Feb 4 12:57:48.618: INFO: Pod "pod-projected-secrets-4de1829a-524e-45c1-bc48-7793d73deeec" satisfied condition "Succeeded or Failed" Feb 4 12:57:48.707: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-4de1829a-524e-45c1-bc48-7793d73deeec container projected-secret-volume-test: STEP: delete the pod Feb 4 12:57:48.946: INFO: Waiting for pod pod-projected-secrets-4de1829a-524e-45c1-bc48-7793d73deeec to disappear Feb 4 12:57:48.965: INFO: Pod pod-projected-secrets-4de1829a-524e-45c1-bc48-7793d73deeec no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:57:48.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6958" for this suite. • [SLOW TEST:14.219 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":311,"completed":73,"skipped":1282,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:57:49.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0204 12:58:00.482463 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Feb 4 12:59:02.521: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:59:02.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-159" for this suite. 
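The garbage collection above hinges on ownerReferences: pods created by a ReplicationController carry a reference back to it, so deleting the RC without orphaning lets the cluster's garbage collector cascade to the pods ("wait for all pods to be garbage collected"). A client-go sketch of that delete, assuming a clientset is already built:

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteRCWithCascade sketches the non-orphaning delete this test performs:
// Background propagation tells the garbage collector to remove the pods the
// ReplicationController owns (via their ownerReferences) after the RC is gone.
func deleteRCWithCascade(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationBackground
	return cs.CoreV1().ReplicationControllers(ns).Delete(ctx, name,
		metav1.DeleteOptions{PropagationPolicy: &policy})
}

With metav1.DeletePropagationOrphan instead, the pods would survive the RC, which is the contrasting behavior other GC conformance tests check.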
• [SLOW TEST:73.457 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":311,"completed":74,"skipped":1312,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:59:02.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Feb 4 12:59:02.610: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 4 12:59:02.647: INFO: Waiting for terminating namespaces to be deleted... Feb 4 12:59:02.650: INFO: Logging pods the apiserver thinks is on node latest-worker before test Feb 4 12:59:02.656: INFO: chaos-controller-manager-69c479c674-tdrls from default started at 2021-01-26 11:59:56 +0000 UTC (1 container statuses recorded) Feb 4 12:59:02.656: INFO: Container chaos-mesh ready: true, restart count 0 Feb 4 12:59:02.656: INFO: chaos-daemon-vkxzr from default started at 2021-01-26 11:59:56 +0000 UTC (1 container statuses recorded) Feb 4 12:59:02.656: INFO: Container chaos-daemon ready: true, restart count 0 Feb 4 12:59:02.656: INFO: coredns-74ff55c5b-l5h56 from kube-system started at 2021-02-04 12:57:39 +0000 UTC (1 container statuses recorded) Feb 4 12:59:02.656: INFO: Container coredns ready: true, restart count 0 Feb 4 12:59:02.656: INFO: coredns-74ff55c5b-tkk2f from kube-system started at 2021-02-04 12:57:40 +0000 UTC (1 container statuses recorded) Feb 4 12:59:02.656: INFO: Container coredns ready: true, restart count 0 Feb 4 12:59:02.656: INFO: kindnet-5bf5g from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container statuses recorded) Feb 4 12:59:02.656: INFO: Container kindnet-cni ready: true, restart count 0 Feb 4 12:59:02.656: INFO: kube-proxy-f59c8 from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container statuses recorded) Feb 4 12:59:02.656: INFO: Container kube-proxy ready: true, restart count 0 Feb 4 12:59:02.656: INFO: netserver-0 from nettest-7769 started at 2021-02-04 12:58:23 +0000 UTC (1 container statuses recorded) Feb 4 12:59:02.656: INFO: Container webserver ready: true, restart count 0 Feb 4 12:59:02.656: INFO: test-container-pod from nettest-7769 started at 2021-02-04 12:58:41 +0000 UTC (1 container statuses recorded) Feb 4 12:59:02.656: INFO: Container webserver ready: true, restart count 0 Feb 4 12:59:02.656: INFO: netserver-0 from nettest-9892 started at 2021-02-04 12:59:01 +0000 UTC (1 container statuses recorded) Feb 4 12:59:02.656: INFO: Container 
webserver ready: false, restart count 0 Feb 4 12:59:02.656: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Feb 4 12:59:02.663: INFO: csi-mockplugin-0 from csi-mock-volumes-1979-1805 started at 2021-02-04 12:58:00 +0000 UTC (3 container statuses recorded) Feb 4 12:59:02.663: INFO: Container csi-provisioner ready: true, restart count 0 Feb 4 12:59:02.663: INFO: Container driver-registrar ready: true, restart count 0 Feb 4 12:59:02.663: INFO: Container mock ready: true, restart count 0 Feb 4 12:59:02.663: INFO: csi-mockplugin-attacher-0 from csi-mock-volumes-1979-1805 started at 2021-02-04 12:58:00 +0000 UTC (1 container statuses recorded) Feb 4 12:59:02.663: INFO: Container csi-attacher ready: true, restart count 0 Feb 4 12:59:02.663: INFO: csi-mockplugin-resizer-0 from csi-mock-volumes-1979-1805 started at 2021-02-04 12:58:00 +0000 UTC (1 container statuses recorded) Feb 4 12:59:02.663: INFO: Container csi-resizer ready: true, restart count 0 Feb 4 12:59:02.663: INFO: pvc-volume-tester-qj8hw from csi-mock-volumes-1979 started at 2021-02-04 12:58:11 +0000 UTC (1 container statuses recorded) Feb 4 12:59:02.663: INFO: Container volume-tester ready: true, restart count 0 Feb 4 12:59:02.663: INFO: chaos-daemon-g67vf from default started at 2021-01-26 11:59:56 +0000 UTC (1 container statuses recorded) Feb 4 12:59:02.663: INFO: Container chaos-daemon ready: true, restart count 0 Feb 4 12:59:02.663: INFO: kindnet-98jtw from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container statuses recorded) Feb 4 12:59:02.663: INFO: Container kindnet-cni ready: true, restart count 0 Feb 4 12:59:02.663: INFO: kube-proxy-skm7x from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container statuses recorded) Feb 4 12:59:02.663: INFO: Container kube-proxy ready: true, restart count 0 Feb 4 12:59:02.663: INFO: netserver-1 from nettest-7769 started at 2021-02-04 12:58:23 +0000 UTC (1 container statuses recorded) Feb 4 12:59:02.663: INFO: Container webserver ready: true, restart count 0 Feb 4 12:59:02.663: INFO: netserver-1 from nettest-9892 started at 2021-02-04 12:59:01 +0000 UTC (1 container statuses recorded) Feb 4 12:59:02.663: INFO: Container webserver ready: false, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16608cdbe5f9c660], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:59:03.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4310" for this suite. 
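The FailedScheduling event is the assertion here: a pod whose nodeSelector matches no node's labels must stay Pending rather than bind, hence "0/3 nodes are available" above. A sketch of such a pod (the selector key/value and image are assumptions; only the pod name comes from the event):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// unschedulablePod sketches the "nonempty NodeSelector" pod: no node carries
// this label, so the scheduler emits the FailedScheduling event seen above
// instead of binding the pod to a node.
func unschedulablePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"label": "nonexistent-value"}, // assumed key/value
			Containers:   []corev1.Container{{Name: "restricted", Image: "k8s.gcr.io/pause:3.2"}}, // assumed image
		},
	}
}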
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":311,"completed":75,"skipped":1344,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:59:03.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:59:22.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-368" for this suite. STEP: Destroying namespace "nsdeletetest-9567" for this suite. Feb 4 12:59:22.882: INFO: Namespace nsdeletetest-9567 was already deleted STEP: Destroying namespace "nsdeletetest-2789" for this suite. 
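Namespace deletion, as exercised just above, is asynchronous: the namespace enters Terminating, its pods are removed, and only then does the object disappear, which is why the test waits for removal before recreating the same name. A client-go polling sketch of that wait (interval and timeout are assumed, not the suite's values):

package sketch

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// deleteNamespaceAndWait sketches the "Deleting the namespace" / "Waiting for
// the namespace to be removed" steps: issue the delete, then poll until Get
// returns NotFound, i.e. the namespace and all its contents are gone.
func deleteNamespaceAndWait(ctx context.Context, cs kubernetes.Interface, name string) error {
	if err := cs.CoreV1().Namespaces().Delete(ctx, name, metav1.DeleteOptions{}); err != nil {
		return err
	}
	return wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
		switch {
		case apierrors.IsNotFound(err):
			return true, nil // fully removed
		case err != nil:
			return false, err // unexpected API error: stop polling
		default:
			return false, nil // still Terminating: keep polling
		}
	})
}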
• [SLOW TEST:19.253 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":311,"completed":76,"skipped":1354,"failed":0} [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:59:22.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 12:59:23.597: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Feb 4 12:59:28.600: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 4 12:59:28.600: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Feb 4 12:59:28.698: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-5769 eadb4f41-7b18-41b7-a5cd-8ff8748f5618 2080241 1 2021-02-04 12:59:28 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2021-02-04 12:59:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.26 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003f5b5d8 
ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Feb 4 12:59:28.713: INFO: New ReplicaSet "test-cleanup-deployment-874dc686f" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-874dc686f deployment-5769 ba552b27-67d1-4d92-8ca7-6ab73c5fe09a 2080243 1 2021-02-04 12:59:28 +0000 UTC map[name:cleanup-pod pod-template-hash:874dc686f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment eadb4f41-7b18-41b7-a5cd-8ff8748f5618 0xc003f5ba50 0xc003f5ba51}] [] [{kube-controller-manager Update apps/v1 2021-02-04 12:59:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"eadb4f41-7b18-41b7-a5cd-8ff8748f5618\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 874dc686f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:874dc686f] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.26 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003f5bac8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 4 12:59:28.713: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Feb 4 12:59:28.713: INFO: 
&ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-5769 67beca4c-9bd1-463a-a652-2b9c8a2632c0 2080242 1 2021-02-04 12:59:23 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment eadb4f41-7b18-41b7-a5cd-8ff8748f5618 0xc003f5b90f 0xc003f5b940}] [] [{e2e.test Update apps/v1 2021-02-04 12:59:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-02-04 12:59:28 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"eadb4f41-7b18-41b7-a5cd-8ff8748f5618\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003f5b9d8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Feb 4 12:59:28.735: INFO: Pod "test-cleanup-controller-95pqf" is available: &Pod{ObjectMeta:{test-cleanup-controller-95pqf test-cleanup-controller- deployment-5769 d29171d7-a387-4c1e-915a-f96ee9ebbfc7 2080212 0 2021-02-04 12:59:23 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 67beca4c-9bd1-463a-a652-2b9c8a2632c0 0xc003f5bec7 0xc003f5bec8}] [] [{kube-controller-manager Update v1 2021-02-04 12:59:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67beca4c-9bd1-463a-a652-2b9c8a2632c0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 12:59:27 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.117\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bn8nt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bn8nt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bn8nt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 12:59:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 12:59:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-02-04 12:59:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 12:59:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.2.117,StartTime:2021-02-04 12:59:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-04 12:59:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6f45799cfb6538052a973840b95609f65ebfbd56005759ba99bb215b7d76b157,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.117,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 12:59:28.735: INFO: Pod "test-cleanup-deployment-874dc686f-z8z2h" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-874dc686f-z8z2h test-cleanup-deployment-874dc686f- deployment-5769 507f1415-5ecf-448b-92b6-93dd2b324468 2080245 0 2021-02-04 12:59:28 +0000 UTC map[name:cleanup-pod pod-template-hash:874dc686f] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-874dc686f ba552b27-67d1-4d92-8ca7-6ab73c5fe09a 0xc00102c0a0 0xc00102c0a1}] [] [{kube-controller-manager Update v1 2021-02-04 12:59:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba552b27-67d1-4d92-8ca7-6ab73c5fe09a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bn8nt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bn8nt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.26,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bn8nt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,P
rocMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:59:28.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5769" for this suite. • [SLOW TEST:5.932 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":311,"completed":77,"skipped":1354,"failed":0} [k8s.io] Lease lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:59:28.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [AfterEach] [k8s.io] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:59:29.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-7390" for this suite. 
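A note on the "deployment should delete old replica sets" spec that concluded above: the cleanup it verifies is driven by the Deployment's spec.revisionHistoryLimit. A minimal kubectl sketch of the same behavior, assuming a scratch namespace and illustrative names (this is not the suite's own manifest):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleanup-demo            # illustrative name
spec:
  replicas: 1
  revisionHistoryLimit: 0       # keep no superseded ReplicaSets around
  selector:
    matchLabels:
      app: cleanup-demo
  template:
    metadata:
      labels:
        app: cleanup-demo
    spec:
      containers:
      - name: agnhost
        image: k8s.gcr.io/e2e-test-images/agnhost:2.26
EOF
# Changing the pod template triggers a rollout and a new ReplicaSet; with
# revisionHistoryLimit: 0 the old, scaled-down ReplicaSet should then be
# garbage-collected rather than retained:
kubectl patch deployment cleanup-demo -p '{"spec":{"template":{"metadata":{"labels":{"rollout":"two"}}}}}'
kubectl get rs -l app=cleanup-demo    # expect only the replacement ReplicaSet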
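The Lease spec above only checks that the coordination.k8s.io API is served. A minimal sketch of exercising it by hand, with illustrative names; note that the server does not manage the timestamps, the holding client updates spec.renewTime itself:

kubectl apply -f - <<'EOF'
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: demo-lease              # illustrative name
spec:
  holderIdentity: holder-1      # identity of the current holder
  leaseDurationSeconds: 30      # validity window after the last renewal
EOF
kubectl get lease demo-lease -o yaml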
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":311,"completed":78,"skipped":1354,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:59:29.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Feb 4 12:59:29.246: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Feb 4 12:59:42.732: INFO: >>> kubeConfig: /root/.kube/config Feb 4 12:59:46.376: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:59:58.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4632" for this suite. • [SLOW TEST:29.715 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":311,"completed":79,"skipped":1368,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:59:58.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 12:59:59.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-858" for this suite. 
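On the CustomResourcePublishOpenAPI spec above: what it verifies is that every served version of a CRD is published in the aggregated OpenAPI document, whether the versions live in one multi-version CRD or in two CRDs of the same group. A minimal multi-version sketch with an illustrative group and kind (the suite generates randomized ones):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com        # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true               # exactly one version is the storage version
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: true
    storage: false              # served but not stored; still published
    schema:
      openAPIV3Schema:
        type: object
EOF
# Both served versions should appear as definitions in the aggregated document:
kubectl get --raw /openapi/v2 | grep -o 'com.example.v[12].Foo' | sort -u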
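The Secrets spec that ends just above exercises the top-level immutable field (available in this release line). A minimal sketch with illustrative names; once immutable: true is set, writes to the payload are rejected and the only way to change the data is to delete and recreate the Secret:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: demo-immutable          # illustrative name
immutable: true                 # sibling of data/stringData, not under spec
stringData:
  key: value
EOF
# Expected to be rejected by the API server once the Secret is immutable:
kubectl patch secret demo-immutable -p '{"stringData":{"key":"other"}}'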
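The StatefulSet burst-scaling spec that follows (namespace statefulset-7531) leans on two things: podManagementPolicy: Parallel, which lets the controller create and delete pods without waiting on lower ordinals, and an HTTP readiness probe that the test deliberately breaks by moving index.html out of htdocs with kubectl exec. A minimal sketch of such a StatefulSet, with illustrative names (the suite constructs its own):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss-demo                 # illustrative name
spec:
  serviceName: test             # headless Service, as the suite creates
  replicas: 1
  podManagementPolicy: Parallel # burst scaling: no waiting for predecessors
  selector:
    matchLabels:
      app: ss-demo
  template:
    metadata:
      labels:
        app: ss-demo
    spec:
      containers:
      - name: webserver
        image: docker.io/library/httpd:2.4.38-alpine
        readinessProbe:
          httpGet:
            path: /index.html   # fails once the file is moved away
            port: 80
          periodSeconds: 1
EOF
# Break readiness the way the spec does, then scale while unhealthy; with
# Parallel management the scale-up does not block on the unready pod (the
# default OrderedReady policy would wait for ss-demo-0 to become Ready first):
kubectl exec ss-demo-0 -- /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
kubectl scale statefulset ss-demo --replicas=3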
•{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":311,"completed":80,"skipped":1391,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 12:59:59.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-7531 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating stateful set ss in namespace statefulset-7531 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7531 Feb 4 12:59:59.750: INFO: Found 0 stateful pods, waiting for 1 Feb 4 13:00:09.756: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Feb 4 13:00:09.759: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-7531 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 4 13:00:16.181: INFO: stderr: "I0204 13:00:16.031708 1386 log.go:181] (0xc0006ce000) (0xc00093a6e0) Create stream\nI0204 13:00:16.031770 1386 log.go:181] (0xc0006ce000) (0xc00093a6e0) Stream added, broadcasting: 1\nI0204 13:00:16.034703 1386 log.go:181] (0xc0006ce000) Reply frame received for 1\nI0204 13:00:16.034739 1386 log.go:181] (0xc0006ce000) (0xc00093abe0) Create stream\nI0204 13:00:16.034747 1386 log.go:181] (0xc0006ce000) (0xc00093abe0) Stream added, broadcasting: 3\nI0204 13:00:16.035714 1386 log.go:181] (0xc0006ce000) Reply frame received for 3\nI0204 13:00:16.035753 1386 log.go:181] (0xc0006ce000) (0xc00058e1e0) Create stream\nI0204 13:00:16.035764 1386 log.go:181] (0xc0006ce000) (0xc00058e1e0) Stream added, broadcasting: 5\nI0204 13:00:16.036704 1386 log.go:181] (0xc0006ce000) Reply frame received for 5\nI0204 13:00:16.121260 1386 log.go:181] (0xc0006ce000) Data frame received for 5\nI0204 13:00:16.121298 1386 log.go:181] (0xc00058e1e0) (5) Data frame handling\nI0204 13:00:16.121324 1386 log.go:181] (0xc00058e1e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0204 13:00:16.173074 1386 log.go:181] (0xc0006ce000) Data frame received for 3\nI0204 13:00:16.173123 1386 log.go:181] (0xc00093abe0) (3) Data frame handling\nI0204 13:00:16.173156 1386 log.go:181] (0xc00093abe0) (3) Data frame sent\nI0204 13:00:16.173174 1386 log.go:181] (0xc0006ce000) Data frame received for 
3\nI0204 13:00:16.173188 1386 log.go:181] (0xc00093abe0) (3) Data frame handling\nI0204 13:00:16.173371 1386 log.go:181] (0xc0006ce000) Data frame received for 5\nI0204 13:00:16.173392 1386 log.go:181] (0xc00058e1e0) (5) Data frame handling\nI0204 13:00:16.175422 1386 log.go:181] (0xc0006ce000) Data frame received for 1\nI0204 13:00:16.175444 1386 log.go:181] (0xc00093a6e0) (1) Data frame handling\nI0204 13:00:16.175464 1386 log.go:181] (0xc00093a6e0) (1) Data frame sent\nI0204 13:00:16.175482 1386 log.go:181] (0xc0006ce000) (0xc00093a6e0) Stream removed, broadcasting: 1\nI0204 13:00:16.175801 1386 log.go:181] (0xc0006ce000) (0xc00093a6e0) Stream removed, broadcasting: 1\nI0204 13:00:16.175825 1386 log.go:181] (0xc0006ce000) (0xc00093abe0) Stream removed, broadcasting: 3\nI0204 13:00:16.175980 1386 log.go:181] (0xc0006ce000) Go away received\nI0204 13:00:16.176012 1386 log.go:181] (0xc0006ce000) (0xc00058e1e0) Stream removed, broadcasting: 5\n" Feb 4 13:00:16.181: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 4 13:00:16.181: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 4 13:00:16.223: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 4 13:00:26.228: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 4 13:00:26.228: INFO: Waiting for statefulset status.replicas updated to 0 Feb 4 13:00:26.248: INFO: POD NODE PHASE GRACE CONDITIONS Feb 4 13:00:26.248: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 12:59:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 12:59:59 +0000 UTC }] Feb 4 13:00:26.249: INFO: Feb 4 13:00:26.249: INFO: StatefulSet ss has not reached scale 3, at 1 Feb 4 13:00:27.255: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989960114s Feb 4 13:00:29.016: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.983662778s Feb 4 13:00:30.022: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.222703033s Feb 4 13:00:31.028: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.216754796s Feb 4 13:00:32.034: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.210818514s Feb 4 13:00:33.040: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.204523625s Feb 4 13:00:34.046: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.198957159s Feb 4 13:00:35.052: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.192900646s Feb 4 13:00:36.058: INFO: Verifying statefulset ss doesn't scale past 3 for another 186.725729ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7531 Feb 4 13:00:37.063: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-7531 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 4 13:00:37.344: INFO: stderr: "I0204 13:00:37.245862 1405 log.go:181] (0xc000d8a420) (0xc000d82640) Create stream\nI0204 13:00:37.245908 
1405 log.go:181] (0xc000d8a420) (0xc000d82640) Stream added, broadcasting: 1\nI0204 13:00:37.247588 1405 log.go:181] (0xc000d8a420) Reply frame received for 1\nI0204 13:00:37.247625 1405 log.go:181] (0xc000d8a420) (0xc000d14000) Create stream\nI0204 13:00:37.247636 1405 log.go:181] (0xc000d8a420) (0xc000d14000) Stream added, broadcasting: 3\nI0204 13:00:37.248290 1405 log.go:181] (0xc000d8a420) Reply frame received for 3\nI0204 13:00:37.248322 1405 log.go:181] (0xc000d8a420) (0xc0008f8140) Create stream\nI0204 13:00:37.248332 1405 log.go:181] (0xc000d8a420) (0xc0008f8140) Stream added, broadcasting: 5\nI0204 13:00:37.249159 1405 log.go:181] (0xc000d8a420) Reply frame received for 5\nI0204 13:00:37.332767 1405 log.go:181] (0xc000d8a420) Data frame received for 3\nI0204 13:00:37.332826 1405 log.go:181] (0xc000d14000) (3) Data frame handling\nI0204 13:00:37.332930 1405 log.go:181] (0xc000d14000) (3) Data frame sent\nI0204 13:00:37.332958 1405 log.go:181] (0xc000d8a420) Data frame received for 5\nI0204 13:00:37.332970 1405 log.go:181] (0xc0008f8140) (5) Data frame handling\nI0204 13:00:37.332980 1405 log.go:181] (0xc0008f8140) (5) Data frame sent\nI0204 13:00:37.332989 1405 log.go:181] (0xc000d8a420) Data frame received for 5\nI0204 13:00:37.332998 1405 log.go:181] (0xc0008f8140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0204 13:00:37.333019 1405 log.go:181] (0xc000d8a420) Data frame received for 3\nI0204 13:00:37.333041 1405 log.go:181] (0xc000d14000) (3) Data frame handling\nI0204 13:00:37.337974 1405 log.go:181] (0xc000d8a420) Data frame received for 1\nI0204 13:00:37.338004 1405 log.go:181] (0xc000d82640) (1) Data frame handling\nI0204 13:00:37.338040 1405 log.go:181] (0xc000d82640) (1) Data frame sent\nI0204 13:00:37.338063 1405 log.go:181] (0xc000d8a420) (0xc000d82640) Stream removed, broadcasting: 1\nI0204 13:00:37.338085 1405 log.go:181] (0xc000d8a420) Go away received\nI0204 13:00:37.338449 1405 log.go:181] (0xc000d8a420) (0xc000d82640) Stream removed, broadcasting: 1\nI0204 13:00:37.338473 1405 log.go:181] (0xc000d8a420) (0xc000d14000) Stream removed, broadcasting: 3\nI0204 13:00:37.338484 1405 log.go:181] (0xc000d8a420) (0xc0008f8140) Stream removed, broadcasting: 5\n" Feb 4 13:00:37.344: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 4 13:00:37.344: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 4 13:00:37.344: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-7531 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 4 13:00:37.626: INFO: stderr: "I0204 13:00:37.569155 1423 log.go:181] (0xc0007f02c0) (0xc00083a000) Create stream\nI0204 13:00:37.569222 1423 log.go:181] (0xc0007f02c0) (0xc00083a000) Stream added, broadcasting: 1\nI0204 13:00:37.570550 1423 log.go:181] (0xc0007f02c0) Reply frame received for 1\nI0204 13:00:37.570581 1423 log.go:181] (0xc0007f02c0) (0xc00083a0a0) Create stream\nI0204 13:00:37.570593 1423 log.go:181] (0xc0007f02c0) (0xc00083a0a0) Stream added, broadcasting: 3\nI0204 13:00:37.571188 1423 log.go:181] (0xc0007f02c0) Reply frame received for 3\nI0204 13:00:37.571232 1423 log.go:181] (0xc0007f02c0) (0xc0007a9c20) Create stream\nI0204 13:00:37.571248 1423 log.go:181] (0xc0007f02c0) (0xc0007a9c20) Stream added, broadcasting: 5\nI0204 13:00:37.571955 1423 log.go:181] 
(0xc0007f02c0) Reply frame received for 5\nI0204 13:00:37.617332 1423 log.go:181] (0xc0007f02c0) Data frame received for 3\nI0204 13:00:37.617384 1423 log.go:181] (0xc00083a0a0) (3) Data frame handling\nI0204 13:00:37.617397 1423 log.go:181] (0xc00083a0a0) (3) Data frame sent\nI0204 13:00:37.617407 1423 log.go:181] (0xc0007f02c0) Data frame received for 3\nI0204 13:00:37.617415 1423 log.go:181] (0xc00083a0a0) (3) Data frame handling\nI0204 13:00:37.617456 1423 log.go:181] (0xc0007f02c0) Data frame received for 5\nI0204 13:00:37.617490 1423 log.go:181] (0xc0007a9c20) (5) Data frame handling\nI0204 13:00:37.617509 1423 log.go:181] (0xc0007a9c20) (5) Data frame sent\nI0204 13:00:37.617520 1423 log.go:181] (0xc0007f02c0) Data frame received for 5\nI0204 13:00:37.617526 1423 log.go:181] (0xc0007a9c20) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0204 13:00:37.618860 1423 log.go:181] (0xc0007f02c0) Data frame received for 1\nI0204 13:00:37.618879 1423 log.go:181] (0xc00083a000) (1) Data frame handling\nI0204 13:00:37.618891 1423 log.go:181] (0xc00083a000) (1) Data frame sent\nI0204 13:00:37.618903 1423 log.go:181] (0xc0007f02c0) (0xc00083a000) Stream removed, broadcasting: 1\nI0204 13:00:37.618955 1423 log.go:181] (0xc0007f02c0) Go away received\nI0204 13:00:37.619249 1423 log.go:181] (0xc0007f02c0) (0xc00083a000) Stream removed, broadcasting: 1\nI0204 13:00:37.619271 1423 log.go:181] (0xc0007f02c0) (0xc00083a0a0) Stream removed, broadcasting: 3\nI0204 13:00:37.619285 1423 log.go:181] (0xc0007f02c0) (0xc0007a9c20) Stream removed, broadcasting: 5\n" Feb 4 13:00:37.626: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 4 13:00:37.626: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 4 13:00:37.626: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-7531 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 4 13:00:37.824: INFO: stderr: "I0204 13:00:37.754131 1441 log.go:181] (0xc000b42c60) (0xc000cad400) Create stream\nI0204 13:00:37.754183 1441 log.go:181] (0xc000b42c60) (0xc000cad400) Stream added, broadcasting: 1\nI0204 13:00:37.755983 1441 log.go:181] (0xc000b42c60) Reply frame received for 1\nI0204 13:00:37.756030 1441 log.go:181] (0xc000b42c60) (0xc000cadd60) Create stream\nI0204 13:00:37.756045 1441 log.go:181] (0xc000b42c60) (0xc000cadd60) Stream added, broadcasting: 3\nI0204 13:00:37.757224 1441 log.go:181] (0xc000b42c60) Reply frame received for 3\nI0204 13:00:37.757291 1441 log.go:181] (0xc000b42c60) (0xc00064a0a0) Create stream\nI0204 13:00:37.757313 1441 log.go:181] (0xc000b42c60) (0xc00064a0a0) Stream added, broadcasting: 5\nI0204 13:00:37.758373 1441 log.go:181] (0xc000b42c60) Reply frame received for 5\nI0204 13:00:37.816334 1441 log.go:181] (0xc000b42c60) Data frame received for 3\nI0204 13:00:37.816379 1441 log.go:181] (0xc000cadd60) (3) Data frame handling\nI0204 13:00:37.816397 1441 log.go:181] (0xc000cadd60) (3) Data frame sent\nI0204 13:00:37.816408 1441 log.go:181] (0xc000b42c60) Data frame received for 3\nI0204 13:00:37.816417 1441 log.go:181] (0xc000cadd60) (3) Data frame handling\nI0204 13:00:37.816468 1441 log.go:181] (0xc000b42c60) Data frame received for 5\nI0204 13:00:37.816571 1441 log.go:181] (0xc00064a0a0) (5) Data frame 
handling\nI0204 13:00:37.816592 1441 log.go:181] (0xc00064a0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0204 13:00:37.816606 1441 log.go:181] (0xc000b42c60) Data frame received for 5\nI0204 13:00:37.816631 1441 log.go:181] (0xc00064a0a0) (5) Data frame handling\nI0204 13:00:37.818044 1441 log.go:181] (0xc000b42c60) Data frame received for 1\nI0204 13:00:37.818062 1441 log.go:181] (0xc000cad400) (1) Data frame handling\nI0204 13:00:37.818080 1441 log.go:181] (0xc000cad400) (1) Data frame sent\nI0204 13:00:37.818101 1441 log.go:181] (0xc000b42c60) (0xc000cad400) Stream removed, broadcasting: 1\nI0204 13:00:37.818322 1441 log.go:181] (0xc000b42c60) Go away received\nI0204 13:00:37.818457 1441 log.go:181] (0xc000b42c60) (0xc000cad400) Stream removed, broadcasting: 1\nI0204 13:00:37.818480 1441 log.go:181] (0xc000b42c60) (0xc000cadd60) Stream removed, broadcasting: 3\nI0204 13:00:37.818492 1441 log.go:181] (0xc000b42c60) (0xc00064a0a0) Stream removed, broadcasting: 5\n" Feb 4 13:00:37.824: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 4 13:00:37.824: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 4 13:00:37.828: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 4 13:00:37.828: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 4 13:00:37.828: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Feb 4 13:00:37.833: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-7531 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 4 13:00:38.064: INFO: stderr: "I0204 13:00:37.973159 1459 log.go:181] (0xc00003a420) (0xc0003badc0) Create stream\nI0204 13:00:37.973231 1459 log.go:181] (0xc00003a420) (0xc0003badc0) Stream added, broadcasting: 1\nI0204 13:00:37.975142 1459 log.go:181] (0xc00003a420) Reply frame received for 1\nI0204 13:00:37.975204 1459 log.go:181] (0xc00003a420) (0xc00049e780) Create stream\nI0204 13:00:37.975232 1459 log.go:181] (0xc00003a420) (0xc00049e780) Stream added, broadcasting: 3\nI0204 13:00:37.976253 1459 log.go:181] (0xc00003a420) Reply frame received for 3\nI0204 13:00:37.976294 1459 log.go:181] (0xc00003a420) (0xc000b1e1e0) Create stream\nI0204 13:00:37.976308 1459 log.go:181] (0xc00003a420) (0xc000b1e1e0) Stream added, broadcasting: 5\nI0204 13:00:37.977272 1459 log.go:181] (0xc00003a420) Reply frame received for 5\nI0204 13:00:38.056209 1459 log.go:181] (0xc00003a420) Data frame received for 5\nI0204 13:00:38.056244 1459 log.go:181] (0xc000b1e1e0) (5) Data frame handling\nI0204 13:00:38.056260 1459 log.go:181] (0xc000b1e1e0) (5) Data frame sent\nI0204 13:00:38.056272 1459 log.go:181] (0xc00003a420) Data frame received for 5\nI0204 13:00:38.056283 1459 log.go:181] (0xc000b1e1e0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0204 13:00:38.056356 1459 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 13:00:38.056404 1459 log.go:181] (0xc00049e780) (3) Data frame handling\nI0204 13:00:38.056423 1459 log.go:181] (0xc00049e780) (3) Data frame sent\nI0204 13:00:38.056436 1459 log.go:181] (0xc00003a420) Data frame 
received for 3\nI0204 13:00:38.056450 1459 log.go:181] (0xc00049e780) (3) Data frame handling\nI0204 13:00:38.058160 1459 log.go:181] (0xc00003a420) Data frame received for 1\nI0204 13:00:38.058189 1459 log.go:181] (0xc0003badc0) (1) Data frame handling\nI0204 13:00:38.058222 1459 log.go:181] (0xc0003badc0) (1) Data frame sent\nI0204 13:00:38.058248 1459 log.go:181] (0xc00003a420) (0xc0003badc0) Stream removed, broadcasting: 1\nI0204 13:00:38.058282 1459 log.go:181] (0xc00003a420) Go away received\nI0204 13:00:38.058736 1459 log.go:181] (0xc00003a420) (0xc0003badc0) Stream removed, broadcasting: 1\nI0204 13:00:38.058757 1459 log.go:181] (0xc00003a420) (0xc00049e780) Stream removed, broadcasting: 3\nI0204 13:00:38.058771 1459 log.go:181] (0xc00003a420) (0xc000b1e1e0) Stream removed, broadcasting: 5\n" Feb 4 13:00:38.065: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 4 13:00:38.065: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 4 13:00:38.065: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-7531 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 4 13:00:38.324: INFO: stderr: "I0204 13:00:38.195820 1477 log.go:181] (0xc0002a0420) (0xc000428dc0) Create stream\nI0204 13:00:38.195885 1477 log.go:181] (0xc0002a0420) (0xc000428dc0) Stream added, broadcasting: 1\nI0204 13:00:38.197750 1477 log.go:181] (0xc0002a0420) Reply frame received for 1\nI0204 13:00:38.197794 1477 log.go:181] (0xc0002a0420) (0xc000429540) Create stream\nI0204 13:00:38.197804 1477 log.go:181] (0xc0002a0420) (0xc000429540) Stream added, broadcasting: 3\nI0204 13:00:38.198812 1477 log.go:181] (0xc0002a0420) Reply frame received for 3\nI0204 13:00:38.198877 1477 log.go:181] (0xc0002a0420) (0xc0006a4780) Create stream\nI0204 13:00:38.198920 1477 log.go:181] (0xc0002a0420) (0xc0006a4780) Stream added, broadcasting: 5\nI0204 13:00:38.199774 1477 log.go:181] (0xc0002a0420) Reply frame received for 5\nI0204 13:00:38.285834 1477 log.go:181] (0xc0002a0420) Data frame received for 5\nI0204 13:00:38.285858 1477 log.go:181] (0xc0006a4780) (5) Data frame handling\nI0204 13:00:38.285870 1477 log.go:181] (0xc0006a4780) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0204 13:00:38.314033 1477 log.go:181] (0xc0002a0420) Data frame received for 3\nI0204 13:00:38.314067 1477 log.go:181] (0xc000429540) (3) Data frame handling\nI0204 13:00:38.314089 1477 log.go:181] (0xc000429540) (3) Data frame sent\nI0204 13:00:38.314192 1477 log.go:181] (0xc0002a0420) Data frame received for 3\nI0204 13:00:38.314325 1477 log.go:181] (0xc000429540) (3) Data frame handling\nI0204 13:00:38.314438 1477 log.go:181] (0xc0002a0420) Data frame received for 5\nI0204 13:00:38.314458 1477 log.go:181] (0xc0006a4780) (5) Data frame handling\nI0204 13:00:38.316181 1477 log.go:181] (0xc0002a0420) Data frame received for 1\nI0204 13:00:38.316208 1477 log.go:181] (0xc000428dc0) (1) Data frame handling\nI0204 13:00:38.316223 1477 log.go:181] (0xc000428dc0) (1) Data frame sent\nI0204 13:00:38.316240 1477 log.go:181] (0xc0002a0420) (0xc000428dc0) Stream removed, broadcasting: 1\nI0204 13:00:38.316259 1477 log.go:181] (0xc0002a0420) Go away received\nI0204 13:00:38.316984 1477 log.go:181] (0xc0002a0420) (0xc000428dc0) Stream removed, broadcasting: 1\nI0204 13:00:38.317008 1477 log.go:181] (0xc0002a0420) 
(0xc000429540) Stream removed, broadcasting: 3\nI0204 13:00:38.317020 1477 log.go:181] (0xc0002a0420) (0xc0006a4780) Stream removed, broadcasting: 5\n" Feb 4 13:00:38.324: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 4 13:00:38.324: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 4 13:00:38.324: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-7531 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 4 13:00:38.670: INFO: stderr: "I0204 13:00:38.540071 1495 log.go:181] (0xc000d86160) (0xc000d82280) Create stream\nI0204 13:00:38.540149 1495 log.go:181] (0xc000d86160) (0xc000d82280) Stream added, broadcasting: 1\nI0204 13:00:38.542523 1495 log.go:181] (0xc000d86160) Reply frame received for 1\nI0204 13:00:38.542555 1495 log.go:181] (0xc000d86160) (0xc000d82320) Create stream\nI0204 13:00:38.542564 1495 log.go:181] (0xc000d86160) (0xc000d82320) Stream added, broadcasting: 3\nI0204 13:00:38.543690 1495 log.go:181] (0xc000d86160) Reply frame received for 3\nI0204 13:00:38.543729 1495 log.go:181] (0xc000d86160) (0xc000cca000) Create stream\nI0204 13:00:38.543742 1495 log.go:181] (0xc000d86160) (0xc000cca000) Stream added, broadcasting: 5\nI0204 13:00:38.544700 1495 log.go:181] (0xc000d86160) Reply frame received for 5\nI0204 13:00:38.613018 1495 log.go:181] (0xc000d86160) Data frame received for 5\nI0204 13:00:38.613057 1495 log.go:181] (0xc000cca000) (5) Data frame handling\nI0204 13:00:38.613073 1495 log.go:181] (0xc000cca000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0204 13:00:38.661837 1495 log.go:181] (0xc000d86160) Data frame received for 3\nI0204 13:00:38.661871 1495 log.go:181] (0xc000d82320) (3) Data frame handling\nI0204 13:00:38.661895 1495 log.go:181] (0xc000d82320) (3) Data frame sent\nI0204 13:00:38.662048 1495 log.go:181] (0xc000d86160) Data frame received for 5\nI0204 13:00:38.662067 1495 log.go:181] (0xc000cca000) (5) Data frame handling\nI0204 13:00:38.662231 1495 log.go:181] (0xc000d86160) Data frame received for 3\nI0204 13:00:38.662259 1495 log.go:181] (0xc000d82320) (3) Data frame handling\nI0204 13:00:38.664073 1495 log.go:181] (0xc000d86160) Data frame received for 1\nI0204 13:00:38.664094 1495 log.go:181] (0xc000d82280) (1) Data frame handling\nI0204 13:00:38.664111 1495 log.go:181] (0xc000d82280) (1) Data frame sent\nI0204 13:00:38.664239 1495 log.go:181] (0xc000d86160) (0xc000d82280) Stream removed, broadcasting: 1\nI0204 13:00:38.664289 1495 log.go:181] (0xc000d86160) Go away received\nI0204 13:00:38.664722 1495 log.go:181] (0xc000d86160) (0xc000d82280) Stream removed, broadcasting: 1\nI0204 13:00:38.664753 1495 log.go:181] (0xc000d86160) (0xc000d82320) Stream removed, broadcasting: 3\nI0204 13:00:38.664771 1495 log.go:181] (0xc000d86160) (0xc000cca000) Stream removed, broadcasting: 5\n" Feb 4 13:00:38.670: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 4 13:00:38.670: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 4 13:00:38.670: INFO: Waiting for statefulset status.replicas updated to 0 Feb 4 13:00:38.684: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Feb 4 13:00:48.693: INFO: Waiting for pod ss-0 to enter Running - 
Ready=false, currently Running - Ready=false Feb 4 13:00:48.693: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 4 13:00:48.693: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 4 13:00:48.710: INFO: POD NODE PHASE GRACE CONDITIONS Feb 4 13:00:48.710: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 12:59:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 12:59:59 +0000 UTC }] Feb 4 13:00:48.710: INFO: ss-1 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC }] Feb 4 13:00:48.710: INFO: ss-2 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC }] Feb 4 13:00:48.710: INFO: Feb 4 13:00:48.710: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 4 13:00:49.819: INFO: POD NODE PHASE GRACE CONDITIONS Feb 4 13:00:49.819: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 12:59:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 12:59:59 +0000 UTC }] Feb 4 13:00:49.819: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC }] Feb 4 13:00:49.819: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC }] Feb 4 13:00:49.819: INFO: Feb 4 13:00:49.819: INFO: 
StatefulSet ss has not reached scale 0, at 3 Feb 4 13:00:50.968: INFO: POD NODE PHASE GRACE CONDITIONS Feb 4 13:00:50.968: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 12:59:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 12:59:59 +0000 UTC }] Feb 4 13:00:50.968: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC }] Feb 4 13:00:50.968: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC }] Feb 4 13:00:50.968: INFO: Feb 4 13:00:50.968: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 4 13:00:51.972: INFO: POD NODE PHASE GRACE CONDITIONS Feb 4 13:00:51.972: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 12:59:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 12:59:59 +0000 UTC }] Feb 4 13:00:51.972: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC }] Feb 4 13:00:51.972: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC }] Feb 4 13:00:51.972: INFO: Feb 4 13:00:51.972: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 4 13:00:52.978: INFO: POD NODE PHASE GRACE CONDITIONS Feb 4 13:00:52.978: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 
2021-02-04 12:59:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 12:59:59 +0000 UTC }] Feb 4 13:00:52.978: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC }] Feb 4 13:00:52.978: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC }] Feb 4 13:00:52.978: INFO: Feb 4 13:00:52.978: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 4 13:00:53.983: INFO: POD NODE PHASE GRACE CONDITIONS Feb 4 13:00:53.983: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 12:59:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 12:59:59 +0000 UTC }] Feb 4 13:00:53.983: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC }] Feb 4 13:00:53.983: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC }] Feb 4 13:00:53.983: INFO: Feb 4 13:00:53.983: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 4 13:00:54.989: INFO: POD NODE PHASE GRACE CONDITIONS Feb 4 13:00:54.989: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 12:59:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 
UTC 2021-02-04 13:00:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 12:59:59 +0000 UTC }] Feb 4 13:00:54.989: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC }] Feb 4 13:00:54.989: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC }] Feb 4 13:00:54.989: INFO: Feb 4 13:00:54.989: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 4 13:00:55.998: INFO: POD NODE PHASE GRACE CONDITIONS Feb 4 13:00:55.998: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 12:59:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 12:59:59 +0000 UTC }] Feb 4 13:00:55.998: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC }] Feb 4 13:00:55.998: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC }] Feb 4 13:00:55.998: INFO: Feb 4 13:00:55.999: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 4 13:00:57.004: INFO: POD NODE PHASE GRACE CONDITIONS Feb 4 13:00:57.004: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 12:59:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 12:59:59 +0000 UTC }] Feb 4 13:00:57.004: INFO: ss-1 
latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC }] Feb 4 13:00:57.004: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 13:00:26 +0000 UTC }] Feb 4 13:00:57.004: INFO: Feb 4 13:00:57.004: INFO: StatefulSet ss has not reached scale 0, at 3 (one more status dump, identical apart from its 13:00:58.063 timestamp, followed and again reported: StatefulSet ss has not reached scale 0, at 3)
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-7531
Feb 4 13:00:59.067: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-7531 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 4 13:00:59.204: INFO: rc: 1 Feb 4 13:00:59.204: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-7531 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1
(the identical RunHostCmd retry then repeated every 10s with rc: 1; attempts through 13:01:40 failed with the same "container not found" error, and from 13:01:50 onward, once the pod itself was gone, with: Error from server (NotFound): pods "ss-0" not found error: exit status 1; the retries continued unchanged through 13:05:34) Feb 4 
13:05:44.253: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-7531 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 4 13:05:44.351: INFO: rc: 1 Feb 4 13:05:44.351: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-7531 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 4 13:05:54.351: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-7531 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 4 13:05:54.445: INFO: rc: 1 Feb 4 13:05:54.445: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-7531 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 4 13:06:04.446: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-7531 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 4 13:06:04.548: INFO: rc: 1 Feb 4 13:06:04.548: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Feb 4 13:06:04.548: INFO: Scaling statefulset ss to 0 Feb 4 13:06:04.557: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Feb 4 13:06:04.559: INFO: Deleting all statefulset in ns statefulset-7531 Feb 4 13:06:04.579: INFO: Scaling statefulset ss to 0 Feb 4 13:06:04.589: INFO: Waiting for statefulset status.replicas updated to 0 Feb 4 13:06:04.592: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:06:04.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7531" for this suite. 
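For reference, the "burst scaling" behavior exercised above corresponds to a StatefulSet whose podManagementPolicy is Parallel, so the controller creates and deletes pods in bursts instead of one at a time. A minimal sketch using the client-go API types follows; the name, image, and replica count are illustrative, not taken from this run:

```go
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(3)
	ss := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test", // a headless service is assumed to exist
			// Parallel allows burst creation/deletion of pods; the default,
			// OrderedReady, waits for each pod to become Ready first.
			PodManagementPolicy: appsv1.ParallelPodManagement,
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "ss"},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "ss"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "webserver", Image: "httpd:2.4"}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ss, "", "  ")
	fmt.Println(string(out))
}
```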
• [SLOW TEST:365.296 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":311,"completed":81,"skipped":1421,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:06:04.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 4 13:06:05.345: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 4 13:06:07.387: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748040765, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748040765, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748040765, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748040765, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 13:06:09.418: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748040765, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748040765, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748040765, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748040765, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 4 13:06:12.511: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:06:24.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6931" for this suite. STEP: Destroying namespace "webhook-6931-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:20.235 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":311,"completed":82,"skipped":1432,"failed":0} SSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:06:24.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward api env vars Feb 4 13:06:25.249: INFO: Waiting up to 5m0s for pod "downward-api-c08b3c3b-97d9-4f92-8cd3-a7f9d21382bb" in namespace "downward-api-3996" to be "Succeeded 
or Failed" Feb 4 13:06:25.274: INFO: Pod "downward-api-c08b3c3b-97d9-4f92-8cd3-a7f9d21382bb": Phase="Pending", Reason="", readiness=false. Elapsed: 25.39547ms Feb 4 13:06:27.330: INFO: Pod "downward-api-c08b3c3b-97d9-4f92-8cd3-a7f9d21382bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08155524s Feb 4 13:06:29.335: INFO: Pod "downward-api-c08b3c3b-97d9-4f92-8cd3-a7f9d21382bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.086177135s STEP: Saw pod success Feb 4 13:06:29.335: INFO: Pod "downward-api-c08b3c3b-97d9-4f92-8cd3-a7f9d21382bb" satisfied condition "Succeeded or Failed" Feb 4 13:06:29.338: INFO: Trying to get logs from node latest-worker2 pod downward-api-c08b3c3b-97d9-4f92-8cd3-a7f9d21382bb container dapi-container: STEP: delete the pod Feb 4 13:06:29.503: INFO: Waiting for pod downward-api-c08b3c3b-97d9-4f92-8cd3-a7f9d21382bb to disappear Feb 4 13:06:29.616: INFO: Pod downward-api-c08b3c3b-97d9-4f92-8cd3-a7f9d21382bb no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:06:29.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3996" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":311,"completed":83,"skipped":1439,"failed":0} SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:06:29.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Feb 4 13:06:29.690: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 4 13:06:29.736: INFO: Waiting for terminating namespaces to be deleted... 
Feb 4 13:06:29.739: INFO: Logging pods the apiserver thinks is on node latest-worker before test Feb 4 13:06:29.745: INFO: chaos-controller-manager-69c479c674-tdrls from default started at 2021-01-26 11:59:56 +0000 UTC (1 container statuses recorded) Feb 4 13:06:29.745: INFO: Container chaos-mesh ready: true, restart count 0 Feb 4 13:06:29.745: INFO: chaos-daemon-vkxzr from default started at 2021-01-26 11:59:56 +0000 UTC (1 container statuses recorded) Feb 4 13:06:29.745: INFO: Container chaos-daemon ready: true, restart count 0 Feb 4 13:06:29.745: INFO: coredns-74ff55c5b-l5h56 from kube-system started at 2021-02-04 12:57:39 +0000 UTC (1 container statuses recorded) Feb 4 13:06:29.745: INFO: Container coredns ready: true, restart count 0 Feb 4 13:06:29.745: INFO: coredns-74ff55c5b-tkk2f from kube-system started at 2021-02-04 12:57:40 +0000 UTC (1 container statuses recorded) Feb 4 13:06:29.745: INFO: Container coredns ready: true, restart count 0 Feb 4 13:06:29.745: INFO: kindnet-5bf5g from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container statuses recorded) Feb 4 13:06:29.745: INFO: Container kindnet-cni ready: true, restart count 0 Feb 4 13:06:29.746: INFO: kube-proxy-f59c8 from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container statuses recorded) Feb 4 13:06:29.746: INFO: Container kube-proxy ready: true, restart count 0 Feb 4 13:06:29.746: INFO: hostexec-latest-worker-dsvrz from persistent-local-volumes-test-2604 started at 2021-02-04 13:06:16 +0000 UTC (1 container statuses recorded) Feb 4 13:06:29.746: INFO: Container agnhost-container ready: true, restart count 0 Feb 4 13:06:29.746: INFO: pod-135f50e2-3518-4fb4-bc45-79544ad56dc0 from persistent-local-volumes-test-2604 started at 2021-02-04 13:06:28 +0000 UTC (1 container statuses recorded) Feb 4 13:06:29.746: INFO: Container write-pod ready: false, restart count 0 Feb 4 13:06:29.746: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Feb 4 13:06:29.751: INFO: chaos-daemon-g67vf from default started at 2021-01-26 11:59:56 +0000 UTC (1 container statuses recorded) Feb 4 13:06:29.751: INFO: Container chaos-daemon ready: true, restart count 0 Feb 4 13:06:29.751: INFO: kindnet-98jtw from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container statuses recorded) Feb 4 13:06:29.751: INFO: Container kindnet-cni ready: true, restart count 0 Feb 4 13:06:29.751: INFO: kube-proxy-skm7x from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container statuses recorded) Feb 4 13:06:29.751: INFO: Container kube-proxy ready: true, restart count 0 Feb 4 13:06:29.751: INFO: netserver-1 from nettest-8643 started at 2021-02-04 13:04:46 +0000 UTC (1 container statuses recorded) Feb 4 13:06:29.751: INFO: Container webserver ready: true, restart count 0 Feb 4 13:06:29.751: INFO: test-container-pod from nettest-8643 started at 2021-02-04 13:05:06 +0000 UTC (1 container statuses recorded) Feb 4 13:06:29.751: INFO: Container webserver ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
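The STEPs here label a node and then relaunch the pod with a matching nodeSelector, which is what the predicate validates. A minimal sketch of the relaunched pod, assuming client-go types (the label key and value are illustrative, not the random e2e label from this run):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Schedulable only onto nodes carrying the matching label, e.g. after:
	//   kubectl label node <node> example.com/e2e-demo=42
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"example.com/e2e-demo": "42"},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```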
STEP: verifying the node has the label kubernetes.io/e2e-23b34ff4-4dab-46ef-a633-6082c2ed79a3 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-23b34ff4-4dab-46ef-a633-6082c2ed79a3 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-23b34ff4-4dab-46ef-a633-6082c2ed79a3 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:06:41.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6966" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:12.339 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":311,"completed":84,"skipped":1441,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:06:41.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name configmap-test-volume-map-fabae945-6fb6-475c-8c2c-943cab84cbc8 STEP: Creating a pod to test consume configMaps Feb 4 13:06:42.898: INFO: Waiting up to 5m0s for pod "pod-configmaps-d182ea4b-8951-4e3d-a094-6bfa7acccc74" in namespace "configmap-4468" to be "Succeeded or Failed" Feb 4 13:06:43.102: INFO: Pod "pod-configmaps-d182ea4b-8951-4e3d-a094-6bfa7acccc74": Phase="Pending", Reason="", readiness=false. Elapsed: 203.86558ms Feb 4 13:06:45.554: INFO: Pod "pod-configmaps-d182ea4b-8951-4e3d-a094-6bfa7acccc74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.65603413s Feb 4 13:06:47.802: INFO: Pod "pod-configmaps-d182ea4b-8951-4e3d-a094-6bfa7acccc74": Phase="Pending", Reason="", readiness=false. Elapsed: 4.903868783s Feb 4 13:06:50.384: INFO: Pod "pod-configmaps-d182ea4b-8951-4e3d-a094-6bfa7acccc74": Phase="Running", Reason="", readiness=true. Elapsed: 7.486267727s Feb 4 13:06:52.695: INFO: Pod "pod-configmaps-d182ea4b-8951-4e3d-a094-6bfa7acccc74": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.797318184s STEP: Saw pod success Feb 4 13:06:52.695: INFO: Pod "pod-configmaps-d182ea4b-8951-4e3d-a094-6bfa7acccc74" satisfied condition "Succeeded or Failed" Feb 4 13:06:52.757: INFO: Trying to get logs from node latest-worker pod pod-configmaps-d182ea4b-8951-4e3d-a094-6bfa7acccc74 container agnhost-container: STEP: delete the pod Feb 4 13:06:52.894: INFO: Waiting for pod pod-configmaps-d182ea4b-8951-4e3d-a094-6bfa7acccc74 to disappear Feb 4 13:06:52.905: INFO: Pod pod-configmaps-d182ea4b-8951-4e3d-a094-6bfa7acccc74 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:06:52.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4468" for this suite. • [SLOW TEST:11.003 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":85,"skipped":1446,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:06:52.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating secret with name secret-test-map-57d5e3f9-5053-4e3c-b2c9-da88d47bd943 STEP: Creating a pod to test consume secrets Feb 4 13:06:53.278: INFO: Waiting up to 5m0s for pod "pod-secrets-14a4bf25-8fe9-40f2-85d4-aaadc7247a83" in namespace "secrets-4772" to be "Succeeded or Failed" Feb 4 13:06:53.332: INFO: Pod "pod-secrets-14a4bf25-8fe9-40f2-85d4-aaadc7247a83": Phase="Pending", Reason="", readiness=false. Elapsed: 54.004138ms Feb 4 13:06:55.446: INFO: Pod "pod-secrets-14a4bf25-8fe9-40f2-85d4-aaadc7247a83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168247699s Feb 4 13:06:57.450: INFO: Pod "pod-secrets-14a4bf25-8fe9-40f2-85d4-aaadc7247a83": Phase="Pending", Reason="", readiness=false. Elapsed: 4.17198332s Feb 4 13:06:59.474: INFO: Pod "pod-secrets-14a4bf25-8fe9-40f2-85d4-aaadc7247a83": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.196007583s STEP: Saw pod success Feb 4 13:06:59.474: INFO: Pod "pod-secrets-14a4bf25-8fe9-40f2-85d4-aaadc7247a83" satisfied condition "Succeeded or Failed" Feb 4 13:06:59.635: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-14a4bf25-8fe9-40f2-85d4-aaadc7247a83 container secret-volume-test: STEP: delete the pod Feb 4 13:07:00.164: INFO: Waiting for pod pod-secrets-14a4bf25-8fe9-40f2-85d4-aaadc7247a83 to disappear Feb 4 13:07:00.208: INFO: Pod pod-secrets-14a4bf25-8fe9-40f2-85d4-aaadc7247a83 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:07:00.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4772" for this suite. • [SLOW TEST:7.292 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":86,"skipped":1494,"failed":0} SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:07:00.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:07:09.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6996" for this suite. STEP: Destroying namespace "nsdeletetest-9803" for this suite. Feb 4 13:07:09.235: INFO: Namespace nsdeletetest-9803 was already deleted STEP: Destroying namespace "nsdeletetest-2617" for this suite. 
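The Namespaces test above relies on namespace deletion cascading to every object inside the namespace, including services. A sketch of that flow with client-go (kubeconfig path, namespace, and service names are illustrative; error handling is mostly elided):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	ns := "nsdelete-demo"
	cs.CoreV1().Namespaces().Create(ctx, &corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{Name: ns}}, metav1.CreateOptions{})
	cs.CoreV1().Services(ns).Create(ctx, &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "test-service"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "demo"},
			Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(80)}},
		}}, metav1.CreateOptions{})

	// Deleting the namespace removes everything inside it.
	cs.CoreV1().Namespaces().Delete(ctx, ns, metav1.DeleteOptions{})
	for {
		_, err := cs.CoreV1().Services(ns).Get(ctx, "test-service", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Println("service removed with its namespace")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```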
• [SLOW TEST:8.980 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":311,"completed":87,"skipped":1497,"failed":0} SSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:07:09.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:740 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a service nodeport-service with the type=NodePort in namespace services-7430 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-7430 STEP: creating replication controller externalsvc in namespace services-7430 I0204 13:07:09.669184 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-7430, replica count: 2 I0204 13:07:12.719606 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 13:07:15.719919 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Feb 4 13:07:15.854: INFO: Creating new exec pod Feb 4 13:07:19.893: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-7430 exec execpodv52g8 -- /bin/sh -x -c nslookup nodeport-service.services-7430.svc.cluster.local' Feb 4 13:07:20.129: INFO: stderr: "I0204 13:07:20.025996 2060 log.go:181] (0xc0004de000) (0xc000b1a0a0) Create stream\nI0204 13:07:20.026098 2060 log.go:181] (0xc0004de000) (0xc000b1a0a0) Stream added, broadcasting: 1\nI0204 13:07:20.028238 2060 log.go:181] (0xc0004de000) Reply frame received for 1\nI0204 13:07:20.028292 2060 log.go:181] (0xc0004de000) (0xc000b1a6e0) Create stream\nI0204 13:07:20.028307 2060 log.go:181] (0xc0004de000) (0xc000b1a6e0) Stream added, broadcasting: 3\nI0204 13:07:20.029583 2060 log.go:181] (0xc0004de000) Reply frame received for 3\nI0204 13:07:20.029648 2060 log.go:181] (0xc0004de000) (0xc000ca43c0) Create stream\nI0204 13:07:20.029665 2060 log.go:181] (0xc0004de000) (0xc000ca43c0) Stream added, broadcasting: 5\nI0204 13:07:20.030541 2060 log.go:181] 
(0xc0004de000) Reply frame received for 5\nI0204 13:07:20.110236 2060 log.go:181] (0xc0004de000) Data frame received for 5\nI0204 13:07:20.110268 2060 log.go:181] (0xc000ca43c0) (5) Data frame handling\nI0204 13:07:20.110289 2060 log.go:181] (0xc000ca43c0) (5) Data frame sent\n+ nslookup nodeport-service.services-7430.svc.cluster.local\nI0204 13:07:20.118735 2060 log.go:181] (0xc0004de000) Data frame received for 3\nI0204 13:07:20.118765 2060 log.go:181] (0xc000b1a6e0) (3) Data frame handling\nI0204 13:07:20.118791 2060 log.go:181] (0xc000b1a6e0) (3) Data frame sent\nI0204 13:07:20.119933 2060 log.go:181] (0xc0004de000) Data frame received for 3\nI0204 13:07:20.119966 2060 log.go:181] (0xc000b1a6e0) (3) Data frame handling\nI0204 13:07:20.120000 2060 log.go:181] (0xc000b1a6e0) (3) Data frame sent\nI0204 13:07:20.120293 2060 log.go:181] (0xc0004de000) Data frame received for 3\nI0204 13:07:20.120312 2060 log.go:181] (0xc000b1a6e0) (3) Data frame handling\nI0204 13:07:20.120515 2060 log.go:181] (0xc0004de000) Data frame received for 5\nI0204 13:07:20.120598 2060 log.go:181] (0xc000ca43c0) (5) Data frame handling\nI0204 13:07:20.122561 2060 log.go:181] (0xc0004de000) Data frame received for 1\nI0204 13:07:20.122594 2060 log.go:181] (0xc000b1a0a0) (1) Data frame handling\nI0204 13:07:20.122614 2060 log.go:181] (0xc000b1a0a0) (1) Data frame sent\nI0204 13:07:20.122638 2060 log.go:181] (0xc0004de000) (0xc000b1a0a0) Stream removed, broadcasting: 1\nI0204 13:07:20.123009 2060 log.go:181] (0xc0004de000) (0xc000b1a0a0) Stream removed, broadcasting: 1\nI0204 13:07:20.123029 2060 log.go:181] (0xc0004de000) (0xc000b1a6e0) Stream removed, broadcasting: 3\nI0204 13:07:20.123040 2060 log.go:181] (0xc0004de000) (0xc000ca43c0) Stream removed, broadcasting: 5\n" Feb 4 13:07:20.129: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-7430.svc.cluster.local\tcanonical name = externalsvc.services-7430.svc.cluster.local.\nName:\texternalsvc.services-7430.svc.cluster.local\nAddress: 10.96.66.104\n\n" STEP: deleting ReplicationController externalsvc in namespace services-7430, will wait for the garbage collector to delete the pods Feb 4 13:07:20.190: INFO: Deleting ReplicationController externalsvc took: 8.382531ms Feb 4 13:07:20.291: INFO: Terminating ReplicationController externalsvc pods took: 100.253895ms Feb 4 13:07:41.221: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:07:41.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7430" for this suite. 
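The type change performed above turns a NodePort service into an ExternalName service, which resolves as a DNS CNAME instead of proxying traffic. A sketch of the mutated spec using client-go types (names and the CNAME target are illustrative; the claim that selector, ports, and clusterIP are cleared reflects that they are not meaningful for ExternalName, not the exact e2e helper):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "nodeport-service"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"app": "demo"},
			Ports:    []corev1.ServicePort{{Port: 80}},
		},
	}

	// Convert to ExternalName: the service becomes a CNAME to the target,
	// so the proxying fields are cleared before issuing the Update.
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = "externalsvc.services-demo.svc.cluster.local"
	svc.Spec.Selector = nil
	svc.Spec.Ports = nil
	svc.Spec.ClusterIP = ""

	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}
```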
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:744 • [SLOW TEST:32.112 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":311,"completed":88,"skipped":1503,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:07:41.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating projection with secret that has name secret-emptykey-test-3f8762fc-2e56-40b0-8f5c-572e83289ee6 [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:07:41.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6402" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":311,"completed":89,"skipped":1513,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:07:41.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating pod Feb 4 13:07:45.752: INFO: Pod pod-hostip-f4064466-3f31-459f-8439-7fd598ec88b3 has hostIP: 172.18.0.16 [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:07:45.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5861" for this suite. 
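The Pods test above checks that status.hostIP is populated once the pod is bound to a node. Reading it with client-go is a one-liner; a sketch (kubeconfig path, namespace, and pod name are illustrative):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod, err := cs.CoreV1().Pods("default").Get(context.TODO(), "my-pod", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// status.hostIP is filled in once the pod is scheduled onto a node.
	fmt.Printf("pod %s has hostIP: %s\n", pod.Name, pod.Status.HostIP)
}
```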
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":311,"completed":90,"skipped":1529,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:07:45.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Feb 4 13:07:45.890: INFO: Waiting up to 1m0s for all nodes to be ready Feb 4 13:08:45.916: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:08:45.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679 [It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 13:08:46.065: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update. Feb 4 13:08:46.069: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:08:46.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-7837" for this suite. [AfterEach] PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:08:46.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-6714" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:60.491 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PriorityClass endpoints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673 verify PriorityClass endpoints can be operated with different HTTP methods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":311,"completed":91,"skipped":1572,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:08:46.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Feb 4 13:08:46.355: INFO: Waiting up to 5m0s for pod "downwardapi-volume-86e1e0dc-3ff0-4b97-b606-45abe7e2fd8c" in namespace "downward-api-711" to be "Succeeded or Failed" Feb 4 13:08:46.377: INFO: Pod "downwardapi-volume-86e1e0dc-3ff0-4b97-b606-45abe7e2fd8c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.167748ms Feb 4 13:08:48.382: INFO: Pod "downwardapi-volume-86e1e0dc-3ff0-4b97-b606-45abe7e2fd8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027164744s Feb 4 13:08:50.387: INFO: Pod "downwardapi-volume-86e1e0dc-3ff0-4b97-b606-45abe7e2fd8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031874474s STEP: Saw pod success Feb 4 13:08:50.387: INFO: Pod "downwardapi-volume-86e1e0dc-3ff0-4b97-b606-45abe7e2fd8c" satisfied condition "Succeeded or Failed" Feb 4 13:08:50.393: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-86e1e0dc-3ff0-4b97-b606-45abe7e2fd8c container client-container: STEP: delete the pod Feb 4 13:08:50.463: INFO: Waiting for pod downwardapi-volume-86e1e0dc-3ff0-4b97-b606-45abe7e2fd8c to disappear Feb 4 13:08:50.472: INFO: Pod downwardapi-volume-86e1e0dc-3ff0-4b97-b606-45abe7e2fd8c no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:08:50.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-711" for this suite. 
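The Downward API volume test above exposes the container's memory limit as a file rather than an environment variable. A sketch of such a pod with client-go types (image, paths, and the 64Mi limit are illustrative; note that a downwardAPI volume resourceFieldRef must name the container it reads from):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```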
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":311,"completed":92,"skipped":1593,"failed":0} ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:08:50.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating the pod Feb 4 13:08:50.564: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:08:59.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3981" for this suite. • [SLOW TEST:8.655 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":311,"completed":93,"skipped":1593,"failed":0} SS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:08:59.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating pod busybox-cbf0a592-a2d1-434b-b442-53ef79043100 in namespace container-probe-7918 Feb 4 13:09:03.244: INFO: Started pod busybox-cbf0a592-a2d1-434b-b442-53ef79043100 in namespace container-probe-7918 STEP: checking the pod's current state and verifying that restartCount is present Feb 4 13:09:03.247: INFO: Initial restart count of pod 
busybox-cbf0a592-a2d1-434b-b442-53ef79043100 is 0 Feb 4 13:09:54.798: INFO: Restart count of pod container-probe-7918/busybox-cbf0a592-a2d1-434b-b442-53ef79043100 is now 1 (51.551354743s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:09:54.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7918" for this suite. • [SLOW TEST:55.884 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":311,"completed":94,"skipped":1595,"failed":0} SSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:09:55.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 13:09:55.397: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-49625b57-210b-40d4-93db-92aef540e5bb" in namespace "security-context-test-2566" to be "Succeeded or Failed" Feb 4 13:09:55.600: INFO: Pod "alpine-nnp-false-49625b57-210b-40d4-93db-92aef540e5bb": Phase="Pending", Reason="", readiness=false. Elapsed: 203.418063ms Feb 4 13:09:57.963: INFO: Pod "alpine-nnp-false-49625b57-210b-40d4-93db-92aef540e5bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.565956275s Feb 4 13:10:00.468: INFO: Pod "alpine-nnp-false-49625b57-210b-40d4-93db-92aef540e5bb": Phase="Pending", Reason="", readiness=false. Elapsed: 5.071217409s Feb 4 13:10:03.313: INFO: Pod "alpine-nnp-false-49625b57-210b-40d4-93db-92aef540e5bb": Phase="Running", Reason="", readiness=true. Elapsed: 7.916107364s Feb 4 13:10:05.316: INFO: Pod "alpine-nnp-false-49625b57-210b-40d4-93db-92aef540e5bb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.918678967s Feb 4 13:10:05.316: INFO: Pod "alpine-nnp-false-49625b57-210b-40d4-93db-92aef540e5bb" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:10:05.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2566" for this suite. • [SLOW TEST:10.310 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":95,"skipped":1602,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:10:05.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:10:15.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3081" for this suite. 
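The Watchers test above starts one watch per observed resource version and checks that all watchers deliver the versions in the same order. A minimal client-go watch loop looks like the following sketch (kubeconfig path and namespace are illustrative):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Each delivered event carries the object's new resourceVersion; the
	// e2e test starts several watches from specific versions and compares
	// the ordering across them.
	w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		fmt.Printf("%s %T\n", ev.Type, ev.Object)
	}
}
```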
• [SLOW TEST:10.214 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":311,"completed":96,"skipped":1634,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:10:15.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 13:10:15.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Feb 4 13:10:16.413: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-02-04T13:10:16Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-02-04T13:10:16Z]] name:name1 resourceVersion:2084765 uid:97eb94ad-eff6-4a57-99cb-b56583b1d381] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Feb 4 13:10:26.475: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-02-04T13:10:26Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-02-04T13:10:26Z]] name:name2 resourceVersion:2084801 uid:38f8e06b-8568-407b-99aa-19469125973c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Feb 4 13:10:36.493: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-02-04T13:10:16Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-02-04T13:10:36Z]] name:name1 resourceVersion:2084823 uid:97eb94ad-eff6-4a57-99cb-b56583b1d381] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Feb 4 13:10:46.503: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-02-04T13:10:26Z generation:2 
managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-02-04T13:10:46Z]] name:name2 resourceVersion:2084846 uid:38f8e06b-8568-407b-99aa-19469125973c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Feb 4 13:10:56.515: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-02-04T13:10:16Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-02-04T13:10:36Z]] name:name1 resourceVersion:2084867 uid:97eb94ad-eff6-4a57-99cb-b56583b1d381] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Feb 4 13:11:06.528: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2021-02-04T13:10:26Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2021-02-04T13:10:46Z]] name:name2 resourceVersion:2084888 uid:38f8e06b-8568-407b-99aa-19469125973c] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:11:17.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-2708" for this suite. • [SLOW TEST:61.507 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":311,"completed":97,"skipped":1642,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:11:17.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:11:45.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4604" for this suite. • [SLOW TEST:28.281 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":311,"completed":98,"skipped":1643,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:11:45.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:11:58.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6265" for this suite. • [SLOW TEST:13.378 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":311,"completed":99,"skipped":1659,"failed":0} S ------------------------------ [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:11:58.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Feb 4 13:11:59.306: INFO: starting watch STEP: patching STEP: updating Feb 4 13:11:59.336: INFO: waiting for watch events with expected annotations Feb 4 13:11:59.336: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:11:59.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-3473" for this suite. 
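The CSR spec above walks the full verb set on CertificateSigningRequests: create, get, list, watch, patch, update, the /approval and /status subresources, and delete. A hedged end-to-end sketch of the same API surface with kubectl follows; the `demo-user` name, the key/CSR file names, and the choice of the kube-apiserver-client signer are assumptions for illustration, not what the test created:

    # Generate a key and a PKCS#10 request, then wrap it in a CSR object.
    openssl req -new -newkey rsa:2048 -nodes -keyout demo-user.key \
      -subj "/CN=demo-user" -out demo-user.csr
    cat <<EOF | kubectl apply -f -
    apiVersion: certificates.k8s.io/v1
    kind: CertificateSigningRequest
    metadata:
      name: demo-user
    spec:
      request: $(base64 < demo-user.csr | tr -d '\n')
      signerName: kubernetes.io/kube-apiserver-client
      usages: ["client auth"]
    EOF
    kubectl get csr demo-user                      # condition: Pending
    kubectl certificate approve demo-user          # writes the /approval subresource
    kubectl get csr demo-user -o jsonpath='{.status.certificate}'
    kubectl delete csr demo-user

Approving requires RBAC on the certificatesigningrequests/approval subresource, which is why the spec runs as [Privileged:ClusterAdmin].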
•{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":311,"completed":100,"skipped":1660,"failed":0} S ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:11:59.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Feb 4 13:11:59.708: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7449 606fbd9e-3bc2-4e6d-9369-3653a61c3289 2085164 0 2021-02-04 13:11:59 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-02-04 13:11:59 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Feb 4 13:11:59.709: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7449 606fbd9e-3bc2-4e6d-9369-3653a61c3289 2085165 0 2021-02-04 13:11:59 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-02-04 13:11:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Feb 4 13:11:59.739: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7449 606fbd9e-3bc2-4e6d-9369-3653a61c3289 2085166 0 2021-02-04 13:11:59 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-02-04 13:11:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 4 13:11:59.739: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7449 606fbd9e-3bc2-4e6d-9369-3653a61c3289 2085167 0 2021-02-04 13:11:59 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2021-02-04 13:11:59 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:11:59.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7449" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":311,"completed":101,"skipped":1661,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:11:59.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:12:16.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-675" for this suite. • [SLOW TEST:16.909 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":311,"completed":102,"skipped":1679,"failed":0} SSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:12:16.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-9152, will wait for the garbage collector to delete the pods Feb 4 13:12:22.840: INFO: Deleting Job.batch foo took: 7.284902ms Feb 4 13:12:23.441: INFO: Terminating Job.batch foo pods took: 600.374555ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:14:00.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9152" for this suite. • [SLOW TEST:104.212 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":311,"completed":103,"skipped":1685,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:14:00.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name configmap-test-volume-map-17f4f0b0-1ca8-48a1-82e4-a6b724133b63 STEP: Creating a pod to test consume configMaps Feb 4 13:14:00.959: INFO: Waiting up to 5m0s for pod "pod-configmaps-41af1d18-a33d-435f-b175-7ec8a224ac48" in namespace "configmap-2687" to be "Succeeded or Failed" Feb 4 13:14:00.962: INFO: Pod "pod-configmaps-41af1d18-a33d-435f-b175-7ec8a224ac48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.977142ms Feb 4 13:14:03.249: INFO: Pod "pod-configmaps-41af1d18-a33d-435f-b175-7ec8a224ac48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289915702s Feb 4 13:14:05.254: INFO: Pod "pod-configmaps-41af1d18-a33d-435f-b175-7ec8a224ac48": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.29471469s STEP: Saw pod success Feb 4 13:14:05.254: INFO: Pod "pod-configmaps-41af1d18-a33d-435f-b175-7ec8a224ac48" satisfied condition "Succeeded or Failed" Feb 4 13:14:05.257: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-41af1d18-a33d-435f-b175-7ec8a224ac48 container agnhost-container: STEP: delete the pod Feb 4 13:14:05.316: INFO: Waiting for pod pod-configmaps-41af1d18-a33d-435f-b175-7ec8a224ac48 to disappear Feb 4 13:14:05.348: INFO: Pod pod-configmaps-41af1d18-a33d-435f-b175-7ec8a224ac48 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:14:05.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2687" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":311,"completed":104,"skipped":1695,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:14:05.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test emptydir volume type on tmpfs Feb 4 13:14:05.531: INFO: Waiting up to 5m0s for pod "pod-fd9e211b-46e0-49a6-9069-024ec1b0bbcc" in namespace "emptydir-6494" to be "Succeeded or Failed" Feb 4 13:14:05.534: INFO: Pod "pod-fd9e211b-46e0-49a6-9069-024ec1b0bbcc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.075909ms Feb 4 13:14:07.538: INFO: Pod "pod-fd9e211b-46e0-49a6-9069-024ec1b0bbcc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006709452s Feb 4 13:14:09.542: INFO: Pod "pod-fd9e211b-46e0-49a6-9069-024ec1b0bbcc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010931044s STEP: Saw pod success Feb 4 13:14:09.542: INFO: Pod "pod-fd9e211b-46e0-49a6-9069-024ec1b0bbcc" satisfied condition "Succeeded or Failed" Feb 4 13:14:09.545: INFO: Trying to get logs from node latest-worker2 pod pod-fd9e211b-46e0-49a6-9069-024ec1b0bbcc container test-container: STEP: delete the pod Feb 4 13:14:09.575: INFO: Waiting for pod pod-fd9e211b-46e0-49a6-9069-024ec1b0bbcc to disappear Feb 4 13:14:09.593: INFO: Pod pod-fd9e211b-46e0-49a6-9069-024ec1b0bbcc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:14:09.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6494" for this suite. 
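The tmpfs mode assertion above is straightforward to reproduce by hand: an emptyDir with `medium: Memory` is mounted as tmpfs, and the mount carries the default 0777 mode the test checks. A minimal pod sketch follows; the pod name and the busybox image are assumptions, not what the suite ran:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-tmpfs-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        # Print the mount type and the directory mode of the volume.
        command: ["sh", "-c", "mount | grep /test-volume; ls -ld /test-volume"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir:
          medium: Memory   # tmpfs-backed instead of node disk
    EOF
    kubectl logs emptydir-tmpfs-demo   # expect a tmpfs mount and mode drwxrwxrwx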
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":105,"skipped":1744,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:14:09.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 13:14:09.690: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1215 create -f -' Feb 4 13:14:13.366: INFO: stderr: "" Feb 4 13:14:13.366: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Feb 4 13:14:13.366: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1215 create -f -' Feb 4 13:14:13.721: INFO: stderr: "" Feb 4 13:14:13.721: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Feb 4 13:14:14.732: INFO: Selector matched 1 pods for map[app:agnhost] Feb 4 13:14:14.732: INFO: Found 0 / 1 Feb 4 13:14:15.726: INFO: Selector matched 1 pods for map[app:agnhost] Feb 4 13:14:15.726: INFO: Found 0 / 1 Feb 4 13:14:16.733: INFO: Selector matched 1 pods for map[app:agnhost] Feb 4 13:14:16.733: INFO: Found 0 / 1 Feb 4 13:14:17.725: INFO: Selector matched 1 pods for map[app:agnhost] Feb 4 13:14:17.725: INFO: Found 1 / 1 Feb 4 13:14:17.725: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 4 13:14:17.728: INFO: Selector matched 1 pods for map[app:agnhost] Feb 4 13:14:17.729: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Feb 4 13:14:17.729: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1215 describe pod agnhost-primary-c7l2b' Feb 4 13:14:17.839: INFO: stderr: "" Feb 4 13:14:17.839: INFO: stdout: "Name: agnhost-primary-c7l2b\nNamespace: kubectl-1215\nPriority: 0\nNode: latest-worker2/172.18.0.16\nStart Time: Thu, 04 Feb 2021 13:14:13 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 10.244.1.4\nIPs:\n IP: 10.244.1.4\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://b1874d6c065a1c006ab99f3f94dd6996ff6881d879c23ede5b64b7f848097f1c\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.26\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 04 Feb 2021 13:14:16 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-4547d (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-4547d:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-4547d\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-1215/agnhost-primary-c7l2b to latest-worker2\n Normal Pulled 3s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.26\" already present on machine\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n" Feb 4 13:14:17.839: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1215 describe rc agnhost-primary' Feb 4 13:14:17.950: INFO: stderr: "" Feb 4 13:14:17.950: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-1215\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.26\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-primary-c7l2b\n" Feb 4 13:14:17.950: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1215 describe service agnhost-primary' Feb 4 13:14:18.049: INFO: stderr: "" Feb 4 13:14:18.049: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-1215\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Families: \nIP: 10.96.82.107\nIPs: 10.96.82.107\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.4:6379\nSession Affinity: None\nEvents: \n" Feb 4 13:14:18.052: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1215 describe node latest-control-plane' Feb 4 
13:14:18.176: INFO: stderr: "" Feb 4 13:14:18.176: INFO: stdout: "Name: latest-control-plane\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Tue, 26 Jan 2021 08:08:11 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Thu, 04 Feb 2021 13:14:08 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 04 Feb 2021 13:10:40 +0000 Tue, 26 Jan 2021 08:08:07 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 04 Feb 2021 13:10:40 +0000 Tue, 26 Jan 2021 08:08:07 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 04 Feb 2021 13:10:40 +0000 Tue, 26 Jan 2021 08:08:07 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 04 Feb 2021 13:10:40 +0000 Tue, 26 Jan 2021 08:08:53 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.15\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nSystem Info:\n Machine ID: 5453bb7ec42e49739c7b6d2228bc8f1f\n System UUID: db0e74ae-46ee-4a01-9695-58430d8d48f2\n Boot ID: b267d78b-f69b-4338-80e8-3f4944338e5d\n Kernel Version: 4.15.0-118-generic\n OS Image: Ubuntu 20.04 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.21.0-alpha.0\n Kube-Proxy Version: v1.21.0-alpha.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nProviderID: kind://docker/latest/latest-control-plane\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-latest-control-plane 100m (0%) 0 (0%) 100Mi (0%) 0 (0%) 9d\n kube-system kindnet-xpx2l 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 9d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 9d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 9d\n kube-system kube-proxy-vgxkf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 9d\n local-path-storage local-path-provisioner-8b46957d4-j852z 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 750m (4%) 100m (0%)\n memory 150Mi (0%) 50Mi (0%)\n ephemeral-storage 100Mi (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Feb 4 13:14:18.177: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1215 describe 
namespace kubectl-1215' Feb 4 13:14:18.280: INFO: stderr: "" Feb 4 13:14:18.280: INFO: stdout: "Name: kubectl-1215\nLabels: e2e-framework=kubectl\n e2e-run=566162d3-aff6-46f4-8931-4930f825c480\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:14:18.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1215" for this suite. • [SLOW TEST:8.688 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1090 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":311,"completed":106,"skipped":1759,"failed":0} SSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:14:18.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating secret with name secret-test-fc0ed47e-98e9-42ac-bd99-655cab2fb155 STEP: Creating a pod to test consume secrets Feb 4 13:14:18.429: INFO: Waiting up to 5m0s for pod "pod-secrets-bf10c884-2e4e-4545-acad-51d72aee3765" in namespace "secrets-6802" to be "Succeeded or Failed" Feb 4 13:14:18.445: INFO: Pod "pod-secrets-bf10c884-2e4e-4545-acad-51d72aee3765": Phase="Pending", Reason="", readiness=false. Elapsed: 15.69163ms Feb 4 13:14:20.449: INFO: Pod "pod-secrets-bf10c884-2e4e-4545-acad-51d72aee3765": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019745917s Feb 4 13:14:22.453: INFO: Pod "pod-secrets-bf10c884-2e4e-4545-acad-51d72aee3765": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02410911s STEP: Saw pod success Feb 4 13:14:22.453: INFO: Pod "pod-secrets-bf10c884-2e4e-4545-acad-51d72aee3765" satisfied condition "Succeeded or Failed" Feb 4 13:14:22.456: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-bf10c884-2e4e-4545-acad-51d72aee3765 container secret-env-test: STEP: delete the pod Feb 4 13:14:22.483: INFO: Waiting for pod pod-secrets-bf10c884-2e4e-4545-acad-51d72aee3765 to disappear Feb 4 13:14:22.487: INFO: Pod pod-secrets-bf10c884-2e4e-4545-acad-51d72aee3765 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:14:22.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6802" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":311,"completed":107,"skipped":1762,"failed":0} SSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:14:22.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward api env vars Feb 4 13:14:22.603: INFO: Waiting up to 5m0s for pod "downward-api-aacce64d-4f04-4d73-b3d5-b0cec88ad633" in namespace "downward-api-7803" to be "Succeeded or Failed" Feb 4 13:14:22.607: INFO: Pod "downward-api-aacce64d-4f04-4d73-b3d5-b0cec88ad633": Phase="Pending", Reason="", readiness=false. Elapsed: 3.844963ms Feb 4 13:14:24.612: INFO: Pod "downward-api-aacce64d-4f04-4d73-b3d5-b0cec88ad633": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00897697s Feb 4 13:14:26.618: INFO: Pod "downward-api-aacce64d-4f04-4d73-b3d5-b0cec88ad633": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014314675s STEP: Saw pod success Feb 4 13:14:26.618: INFO: Pod "downward-api-aacce64d-4f04-4d73-b3d5-b0cec88ad633" satisfied condition "Succeeded or Failed" Feb 4 13:14:26.621: INFO: Trying to get logs from node latest-worker2 pod downward-api-aacce64d-4f04-4d73-b3d5-b0cec88ad633 container dapi-container: STEP: delete the pod Feb 4 13:14:26.674: INFO: Waiting for pod downward-api-aacce64d-4f04-4d73-b3d5-b0cec88ad633 to disappear Feb 4 13:14:26.687: INFO: Pod downward-api-aacce64d-4f04-4d73-b3d5-b0cec88ad633 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:14:26.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7803" for this suite. 
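The downward API spec above injects pod metadata into the container environment via fieldRef selectors on metadata.name, metadata.namespace, and status.podIP. A minimal sketch of the pattern follows; the POD_* variable names, the pod name, and the busybox image are illustrative assumptions:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-env-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        # Print only the injected variables, then exit.
        command: ["sh", "-c", "env | grep ^POD_"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
    EOF
    kubectl logs downward-env-demo   # expect POD_NAME, POD_NAMESPACE, POD_IP lines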
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":311,"completed":108,"skipped":1766,"failed":0} SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:14:26.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-8351 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-8351 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8351 Feb 4 13:14:26.863: INFO: Found 0 stateful pods, waiting for 1 Feb 4 13:14:36.866: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Feb 4 13:14:36.869: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-8351 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 4 13:14:37.125: INFO: stderr: "I0204 13:14:37.009873 2201 log.go:181] (0xc00001c420) (0xc000154000) Create stream\nI0204 13:14:37.009955 2201 log.go:181] (0xc00001c420) (0xc000154000) Stream added, broadcasting: 1\nI0204 13:14:37.011976 2201 log.go:181] (0xc00001c420) Reply frame received for 1\nI0204 13:14:37.012027 2201 log.go:181] (0xc00001c420) (0xc0009a2500) Create stream\nI0204 13:14:37.012041 2201 log.go:181] (0xc00001c420) (0xc0009a2500) Stream added, broadcasting: 3\nI0204 13:14:37.013327 2201 log.go:181] (0xc00001c420) Reply frame received for 3\nI0204 13:14:37.013361 2201 log.go:181] (0xc00001c420) (0xc0005c4000) Create stream\nI0204 13:14:37.013375 2201 log.go:181] (0xc00001c420) (0xc0005c4000) Stream added, broadcasting: 5\nI0204 13:14:37.014464 2201 log.go:181] (0xc00001c420) Reply frame received for 5\nI0204 13:14:37.082826 2201 log.go:181] (0xc00001c420) Data frame received for 5\nI0204 13:14:37.082857 2201 log.go:181] (0xc0005c4000) (5) Data frame handling\nI0204 13:14:37.082877 2201 log.go:181] (0xc0005c4000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0204 13:14:37.112603 2201 log.go:181] (0xc00001c420) Data frame received for 5\nI0204 13:14:37.112640 2201 log.go:181] (0xc0005c4000) (5) Data frame handling\nI0204 13:14:37.112681 2201 log.go:181] 
(0xc00001c420) Data frame received for 3\nI0204 13:14:37.112704 2201 log.go:181] (0xc0009a2500) (3) Data frame handling\nI0204 13:14:37.112729 2201 log.go:181] (0xc0009a2500) (3) Data frame sent\nI0204 13:14:37.112742 2201 log.go:181] (0xc00001c420) Data frame received for 3\nI0204 13:14:37.112753 2201 log.go:181] (0xc0009a2500) (3) Data frame handling\nI0204 13:14:37.114613 2201 log.go:181] (0xc00001c420) Data frame received for 1\nI0204 13:14:37.114640 2201 log.go:181] (0xc000154000) (1) Data frame handling\nI0204 13:14:37.114667 2201 log.go:181] (0xc000154000) (1) Data frame sent\nI0204 13:14:37.114792 2201 log.go:181] (0xc00001c420) (0xc000154000) Stream removed, broadcasting: 1\nI0204 13:14:37.114829 2201 log.go:181] (0xc00001c420) Go away received\nI0204 13:14:37.118725 2201 log.go:181] (0xc00001c420) (0xc000154000) Stream removed, broadcasting: 1\nI0204 13:14:37.118750 2201 log.go:181] (0xc00001c420) (0xc0009a2500) Stream removed, broadcasting: 3\nI0204 13:14:37.118760 2201 log.go:181] (0xc00001c420) (0xc0005c4000) Stream removed, broadcasting: 5\n" Feb 4 13:14:37.125: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 4 13:14:37.125: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 4 13:14:37.129: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 4 13:14:47.133: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 4 13:14:47.133: INFO: Waiting for statefulset status.replicas updated to 0 Feb 4 13:14:47.153: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999468s Feb 4 13:14:48.158: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.992504092s Feb 4 13:14:49.164: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.987271302s Feb 4 13:14:50.167: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.981441287s Feb 4 13:14:51.208: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.978052577s Feb 4 13:14:52.212: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.93781949s Feb 4 13:14:53.217: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.933458731s Feb 4 13:14:54.221: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.92805835s Feb 4 13:14:55.227: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.924153328s Feb 4 13:14:56.231: INFO: Verifying statefulset ss doesn't scale past 1 for another 918.902704ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8351 Feb 4 13:14:57.237: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-8351 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 4 13:14:57.498: INFO: stderr: "I0204 13:14:57.410718 2219 log.go:181] (0xc00018d8c0) (0xc0006ac960) Create stream\nI0204 13:14:57.411162 2219 log.go:181] (0xc00018d8c0) (0xc0006ac960) Stream added, broadcasting: 1\nI0204 13:14:57.415089 2219 log.go:181] (0xc00018d8c0) Reply frame received for 1\nI0204 13:14:57.415133 2219 log.go:181] (0xc00018d8c0) (0xc0006ac000) Create stream\nI0204 13:14:57.415144 2219 log.go:181] (0xc00018d8c0) (0xc0006ac000) Stream added, broadcasting: 3\nI0204 13:14:57.415885 2219 log.go:181] (0xc00018d8c0) Reply frame received for 3\nI0204 13:14:57.415912 
2219 log.go:181] (0xc00018d8c0) (0xc000383e00) Create stream\nI0204 13:14:57.415920 2219 log.go:181] (0xc00018d8c0) (0xc000383e00) Stream added, broadcasting: 5\nI0204 13:14:57.416675 2219 log.go:181] (0xc00018d8c0) Reply frame received for 5\nI0204 13:14:57.490971 2219 log.go:181] (0xc00018d8c0) Data frame received for 3\nI0204 13:14:57.491012 2219 log.go:181] (0xc0006ac000) (3) Data frame handling\nI0204 13:14:57.491028 2219 log.go:181] (0xc0006ac000) (3) Data frame sent\nI0204 13:14:57.491039 2219 log.go:181] (0xc00018d8c0) Data frame received for 3\nI0204 13:14:57.491049 2219 log.go:181] (0xc0006ac000) (3) Data frame handling\nI0204 13:14:57.491109 2219 log.go:181] (0xc00018d8c0) Data frame received for 5\nI0204 13:14:57.491148 2219 log.go:181] (0xc000383e00) (5) Data frame handling\nI0204 13:14:57.491171 2219 log.go:181] (0xc000383e00) (5) Data frame sent\nI0204 13:14:57.491187 2219 log.go:181] (0xc00018d8c0) Data frame received for 5\nI0204 13:14:57.491203 2219 log.go:181] (0xc000383e00) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0204 13:14:57.492976 2219 log.go:181] (0xc00018d8c0) Data frame received for 1\nI0204 13:14:57.492993 2219 log.go:181] (0xc0006ac960) (1) Data frame handling\nI0204 13:14:57.493003 2219 log.go:181] (0xc0006ac960) (1) Data frame sent\nI0204 13:14:57.493010 2219 log.go:181] (0xc00018d8c0) (0xc0006ac960) Stream removed, broadcasting: 1\nI0204 13:14:57.493228 2219 log.go:181] (0xc00018d8c0) Go away received\nI0204 13:14:57.493311 2219 log.go:181] (0xc00018d8c0) (0xc0006ac960) Stream removed, broadcasting: 1\nI0204 13:14:57.493336 2219 log.go:181] (0xc00018d8c0) (0xc0006ac000) Stream removed, broadcasting: 3\nI0204 13:14:57.493342 2219 log.go:181] (0xc00018d8c0) (0xc000383e00) Stream removed, broadcasting: 5\n" Feb 4 13:14:57.498: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 4 13:14:57.498: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 4 13:14:57.503: INFO: Found 1 stateful pods, waiting for 3 Feb 4 13:15:07.831: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 4 13:15:07.831: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 4 13:15:07.831: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=false Feb 4 13:15:17.507: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 4 13:15:17.507: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 4 13:15:17.507: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Feb 4 13:15:17.513: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-8351 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 4 13:15:17.717: INFO: stderr: "I0204 13:15:17.643589 2237 log.go:181] (0xc000195130) (0xc00068f0e0) Create stream\nI0204 13:15:17.643636 2237 log.go:181] (0xc000195130) (0xc00068f0e0) Stream added, broadcasting: 1\nI0204 13:15:17.645080 2237 log.go:181] (0xc000195130) Reply frame received for 1\nI0204 13:15:17.645111 2237 log.go:181] (0xc000195130) (0xc000792780) Create stream\nI0204 13:15:17.645121 2237 
log.go:181] (0xc000195130) (0xc000792780) Stream added, broadcasting: 3\nI0204 13:15:17.645934 2237 log.go:181] (0xc000195130) Reply frame received for 3\nI0204 13:15:17.645964 2237 log.go:181] (0xc000195130) (0xc00037e3c0) Create stream\nI0204 13:15:17.645980 2237 log.go:181] (0xc000195130) (0xc00037e3c0) Stream added, broadcasting: 5\nI0204 13:15:17.646752 2237 log.go:181] (0xc000195130) Reply frame received for 5\nI0204 13:15:17.710841 2237 log.go:181] (0xc000195130) Data frame received for 3\nI0204 13:15:17.710889 2237 log.go:181] (0xc000792780) (3) Data frame handling\nI0204 13:15:17.710910 2237 log.go:181] (0xc000792780) (3) Data frame sent\nI0204 13:15:17.710924 2237 log.go:181] (0xc000195130) Data frame received for 3\nI0204 13:15:17.710935 2237 log.go:181] (0xc000792780) (3) Data frame handling\nI0204 13:15:17.710978 2237 log.go:181] (0xc000195130) Data frame received for 5\nI0204 13:15:17.711023 2237 log.go:181] (0xc00037e3c0) (5) Data frame handling\nI0204 13:15:17.711054 2237 log.go:181] (0xc00037e3c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0204 13:15:17.711072 2237 log.go:181] (0xc000195130) Data frame received for 5\nI0204 13:15:17.711085 2237 log.go:181] (0xc00037e3c0) (5) Data frame handling\nI0204 13:15:17.712232 2237 log.go:181] (0xc000195130) Data frame received for 1\nI0204 13:15:17.712267 2237 log.go:181] (0xc00068f0e0) (1) Data frame handling\nI0204 13:15:17.712294 2237 log.go:181] (0xc00068f0e0) (1) Data frame sent\nI0204 13:15:17.712313 2237 log.go:181] (0xc000195130) (0xc00068f0e0) Stream removed, broadcasting: 1\nI0204 13:15:17.712383 2237 log.go:181] (0xc000195130) Go away received\nI0204 13:15:17.712771 2237 log.go:181] (0xc000195130) (0xc00068f0e0) Stream removed, broadcasting: 1\nI0204 13:15:17.712799 2237 log.go:181] (0xc000195130) (0xc000792780) Stream removed, broadcasting: 3\nI0204 13:15:17.712818 2237 log.go:181] (0xc000195130) (0xc00037e3c0) Stream removed, broadcasting: 5\n" Feb 4 13:15:17.717: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 4 13:15:17.717: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 4 13:15:17.717: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-8351 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 4 13:15:17.938: INFO: stderr: "I0204 13:15:17.842568 2254 log.go:181] (0xc0001ae370) (0xc0006ac0a0) Create stream\nI0204 13:15:17.842624 2254 log.go:181] (0xc0001ae370) (0xc0006ac0a0) Stream added, broadcasting: 1\nI0204 13:15:17.843981 2254 log.go:181] (0xc0001ae370) Reply frame received for 1\nI0204 13:15:17.844035 2254 log.go:181] (0xc0001ae370) (0xc0006ac8c0) Create stream\nI0204 13:15:17.844092 2254 log.go:181] (0xc0001ae370) (0xc0006ac8c0) Stream added, broadcasting: 3\nI0204 13:15:17.845097 2254 log.go:181] (0xc0001ae370) Reply frame received for 3\nI0204 13:15:17.845138 2254 log.go:181] (0xc0001ae370) (0xc0006ad0e0) Create stream\nI0204 13:15:17.845148 2254 log.go:181] (0xc0001ae370) (0xc0006ad0e0) Stream added, broadcasting: 5\nI0204 13:15:17.845862 2254 log.go:181] (0xc0001ae370) Reply frame received for 5\nI0204 13:15:17.895132 2254 log.go:181] (0xc0001ae370) Data frame received for 5\nI0204 13:15:17.895170 2254 log.go:181] (0xc0006ad0e0) (5) Data frame handling\nI0204 13:15:17.895192 2254 log.go:181] (0xc0006ad0e0) (5) Data frame sent\n+ mv 
-v /usr/local/apache2/htdocs/index.html /tmp/\nI0204 13:15:17.928071 2254 log.go:181] (0xc0001ae370) Data frame received for 3\nI0204 13:15:17.928121 2254 log.go:181] (0xc0006ac8c0) (3) Data frame handling\nI0204 13:15:17.928159 2254 log.go:181] (0xc0006ac8c0) (3) Data frame sent\nI0204 13:15:17.928339 2254 log.go:181] (0xc0001ae370) Data frame received for 5\nI0204 13:15:17.928374 2254 log.go:181] (0xc0006ad0e0) (5) Data frame handling\nI0204 13:15:17.928430 2254 log.go:181] (0xc0001ae370) Data frame received for 3\nI0204 13:15:17.928463 2254 log.go:181] (0xc0006ac8c0) (3) Data frame handling\nI0204 13:15:17.930741 2254 log.go:181] (0xc0001ae370) Data frame received for 1\nI0204 13:15:17.930773 2254 log.go:181] (0xc0006ac0a0) (1) Data frame handling\nI0204 13:15:17.930800 2254 log.go:181] (0xc0006ac0a0) (1) Data frame sent\nI0204 13:15:17.930827 2254 log.go:181] (0xc0001ae370) (0xc0006ac0a0) Stream removed, broadcasting: 1\nI0204 13:15:17.930851 2254 log.go:181] (0xc0001ae370) Go away received\nI0204 13:15:17.931248 2254 log.go:181] (0xc0001ae370) (0xc0006ac0a0) Stream removed, broadcasting: 1\nI0204 13:15:17.931270 2254 log.go:181] (0xc0001ae370) (0xc0006ac8c0) Stream removed, broadcasting: 3\nI0204 13:15:17.931282 2254 log.go:181] (0xc0001ae370) (0xc0006ad0e0) Stream removed, broadcasting: 5\n" Feb 4 13:15:17.938: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 4 13:15:17.938: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 4 13:15:17.938: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-8351 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 4 13:15:18.177: INFO: stderr: "I0204 13:15:18.066878 2273 log.go:181] (0xc000b52000) (0xc00057c1e0) Create stream\nI0204 13:15:18.066935 2273 log.go:181] (0xc000b52000) (0xc00057c1e0) Stream added, broadcasting: 1\nI0204 13:15:18.075997 2273 log.go:181] (0xc000b52000) Reply frame received for 1\nI0204 13:15:18.076051 2273 log.go:181] (0xc000b52000) (0xc0003b9360) Create stream\nI0204 13:15:18.076064 2273 log.go:181] (0xc000b52000) (0xc0003b9360) Stream added, broadcasting: 3\nI0204 13:15:18.077030 2273 log.go:181] (0xc000b52000) Reply frame received for 3\nI0204 13:15:18.077129 2273 log.go:181] (0xc000b52000) (0xc000536320) Create stream\nI0204 13:15:18.077188 2273 log.go:181] (0xc000b52000) (0xc000536320) Stream added, broadcasting: 5\nI0204 13:15:18.077995 2273 log.go:181] (0xc000b52000) Reply frame received for 5\nI0204 13:15:18.128320 2273 log.go:181] (0xc000b52000) Data frame received for 5\nI0204 13:15:18.128354 2273 log.go:181] (0xc000536320) (5) Data frame handling\nI0204 13:15:18.128378 2273 log.go:181] (0xc000536320) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0204 13:15:18.168208 2273 log.go:181] (0xc000b52000) Data frame received for 3\nI0204 13:15:18.168230 2273 log.go:181] (0xc0003b9360) (3) Data frame handling\nI0204 13:15:18.168246 2273 log.go:181] (0xc0003b9360) (3) Data frame sent\nI0204 13:15:18.168431 2273 log.go:181] (0xc000b52000) Data frame received for 3\nI0204 13:15:18.168451 2273 log.go:181] (0xc0003b9360) (3) Data frame handling\nI0204 13:15:18.168631 2273 log.go:181] (0xc000b52000) Data frame received for 5\nI0204 13:15:18.168665 2273 log.go:181] (0xc000536320) (5) Data frame handling\nI0204 13:15:18.170856 2273 log.go:181] (0xc000b52000) Data 
frame received for 1\nI0204 13:15:18.170881 2273 log.go:181] (0xc00057c1e0) (1) Data frame handling\nI0204 13:15:18.170895 2273 log.go:181] (0xc00057c1e0) (1) Data frame sent\nI0204 13:15:18.170956 2273 log.go:181] (0xc000b52000) (0xc00057c1e0) Stream removed, broadcasting: 1\nI0204 13:15:18.171004 2273 log.go:181] (0xc000b52000) Go away received\nI0204 13:15:18.171425 2273 log.go:181] (0xc000b52000) (0xc00057c1e0) Stream removed, broadcasting: 1\nI0204 13:15:18.171451 2273 log.go:181] (0xc000b52000) (0xc0003b9360) Stream removed, broadcasting: 3\nI0204 13:15:18.171470 2273 log.go:181] (0xc000b52000) (0xc000536320) Stream removed, broadcasting: 5\n" Feb 4 13:15:18.177: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 4 13:15:18.177: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 4 13:15:18.177: INFO: Waiting for statefulset status.replicas updated to 0 Feb 4 13:15:18.185: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Feb 4 13:15:28.245: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 4 13:15:28.245: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 4 13:15:28.245: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 4 13:15:28.345: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999969s Feb 4 13:15:29.364: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.951646274s Feb 4 13:15:30.368: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.932582711s Feb 4 13:15:31.373: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.92823166s Feb 4 13:15:32.378: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.923369352s Feb 4 13:15:33.385: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.918194649s Feb 4 13:15:34.405: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.912034691s Feb 4 13:15:35.410: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.891651895s Feb 4 13:15:36.435: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.886263951s Feb 4 13:15:37.440: INFO: Verifying statefulset ss doesn't scale past 3 for another 861.676094ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-8351 Feb 4 13:15:38.452: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-8351 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 4 13:15:38.737: INFO: stderr: "I0204 13:15:38.668648 2291 log.go:181] (0xc00003a420) (0xc000437d60) Create stream\nI0204 13:15:38.668724 2291 log.go:181] (0xc00003a420) (0xc000437d60) Stream added, broadcasting: 1\nI0204 13:15:38.670347 2291 log.go:181] (0xc00003a420) Reply frame received for 1\nI0204 13:15:38.670391 2291 log.go:181] (0xc00003a420) (0xc0006bc460) Create stream\nI0204 13:15:38.670401 2291 log.go:181] (0xc00003a420) (0xc0006bc460) Stream added, broadcasting: 3\nI0204 13:15:38.671401 2291 log.go:181] (0xc00003a420) Reply frame received for 3\nI0204 13:15:38.671461 2291 log.go:181] (0xc00003a420) (0xc000b5e1e0) Create stream\nI0204 13:15:38.671476 2291 log.go:181] (0xc00003a420) (0xc000b5e1e0) Stream added, broadcasting: 5\nI0204 13:15:38.672368 2291 log.go:181] 
(0xc00003a420) Reply frame received for 5\nI0204 13:15:38.729825 2291 log.go:181] (0xc00003a420) Data frame received for 5\nI0204 13:15:38.729858 2291 log.go:181] (0xc000b5e1e0) (5) Data frame handling\nI0204 13:15:38.729876 2291 log.go:181] (0xc000b5e1e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0204 13:15:38.729914 2291 log.go:181] (0xc00003a420) Data frame received for 5\nI0204 13:15:38.729923 2291 log.go:181] (0xc000b5e1e0) (5) Data frame handling\nI0204 13:15:38.730018 2291 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 13:15:38.730028 2291 log.go:181] (0xc0006bc460) (3) Data frame handling\nI0204 13:15:38.730035 2291 log.go:181] (0xc0006bc460) (3) Data frame sent\nI0204 13:15:38.730040 2291 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 13:15:38.730044 2291 log.go:181] (0xc0006bc460) (3) Data frame handling\nI0204 13:15:38.731099 2291 log.go:181] (0xc00003a420) Data frame received for 1\nI0204 13:15:38.731196 2291 log.go:181] (0xc000437d60) (1) Data frame handling\nI0204 13:15:38.731219 2291 log.go:181] (0xc000437d60) (1) Data frame sent\nI0204 13:15:38.731239 2291 log.go:181] (0xc00003a420) (0xc000437d60) Stream removed, broadcasting: 1\nI0204 13:15:38.731282 2291 log.go:181] (0xc00003a420) Go away received\nI0204 13:15:38.731593 2291 log.go:181] (0xc00003a420) (0xc000437d60) Stream removed, broadcasting: 1\nI0204 13:15:38.731615 2291 log.go:181] (0xc00003a420) (0xc0006bc460) Stream removed, broadcasting: 3\nI0204 13:15:38.731629 2291 log.go:181] (0xc00003a420) (0xc000b5e1e0) Stream removed, broadcasting: 5\n" Feb 4 13:15:38.737: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 4 13:15:38.737: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 4 13:15:38.737: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-8351 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 4 13:15:38.941: INFO: stderr: "I0204 13:15:38.868719 2309 log.go:181] (0xc0000ec370) (0xc0008be0a0) Create stream\nI0204 13:15:38.868817 2309 log.go:181] (0xc0000ec370) (0xc0008be0a0) Stream added, broadcasting: 1\nI0204 13:15:38.870972 2309 log.go:181] (0xc0000ec370) Reply frame received for 1\nI0204 13:15:38.871027 2309 log.go:181] (0xc0000ec370) (0xc00031adc0) Create stream\nI0204 13:15:38.871044 2309 log.go:181] (0xc0000ec370) (0xc00031adc0) Stream added, broadcasting: 3\nI0204 13:15:38.872050 2309 log.go:181] (0xc0000ec370) Reply frame received for 3\nI0204 13:15:38.872086 2309 log.go:181] (0xc0000ec370) (0xc000aae1e0) Create stream\nI0204 13:15:38.872096 2309 log.go:181] (0xc0000ec370) (0xc000aae1e0) Stream added, broadcasting: 5\nI0204 13:15:38.873023 2309 log.go:181] (0xc0000ec370) Reply frame received for 5\nI0204 13:15:38.933304 2309 log.go:181] (0xc0000ec370) Data frame received for 3\nI0204 13:15:38.933340 2309 log.go:181] (0xc00031adc0) (3) Data frame handling\nI0204 13:15:38.933358 2309 log.go:181] (0xc00031adc0) (3) Data frame sent\nI0204 13:15:38.933367 2309 log.go:181] (0xc0000ec370) Data frame received for 3\nI0204 13:15:38.933385 2309 log.go:181] (0xc00031adc0) (3) Data frame handling\nI0204 13:15:38.933400 2309 log.go:181] (0xc0000ec370) Data frame received for 5\nI0204 13:15:38.933409 2309 log.go:181] (0xc000aae1e0) (5) Data frame handling\nI0204 13:15:38.933416 2309 log.go:181] (0xc000aae1e0) (5) Data 
frame sent\nI0204 13:15:38.933421 2309 log.go:181] (0xc0000ec370) Data frame received for 5\nI0204 13:15:38.933426 2309 log.go:181] (0xc000aae1e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0204 13:15:38.934778 2309 log.go:181] (0xc0000ec370) Data frame received for 1\nI0204 13:15:38.934801 2309 log.go:181] (0xc0008be0a0) (1) Data frame handling\nI0204 13:15:38.934813 2309 log.go:181] (0xc0008be0a0) (1) Data frame sent\nI0204 13:15:38.934825 2309 log.go:181] (0xc0000ec370) (0xc0008be0a0) Stream removed, broadcasting: 1\nI0204 13:15:38.934872 2309 log.go:181] (0xc0000ec370) Go away received\nI0204 13:15:38.935161 2309 log.go:181] (0xc0000ec370) (0xc0008be0a0) Stream removed, broadcasting: 1\nI0204 13:15:38.935177 2309 log.go:181] (0xc0000ec370) (0xc00031adc0) Stream removed, broadcasting: 3\nI0204 13:15:38.935183 2309 log.go:181] (0xc0000ec370) (0xc000aae1e0) Stream removed, broadcasting: 5\n" Feb 4 13:15:38.941: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 4 13:15:38.941: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 4 13:15:38.941: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-8351 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 4 13:15:39.144: INFO: stderr: "I0204 13:15:39.063716 2327 log.go:181] (0xc0000e80b0) (0xc0007ed0e0) Create stream\nI0204 13:15:39.063820 2327 log.go:181] (0xc0000e80b0) (0xc0007ed0e0) Stream added, broadcasting: 1\nI0204 13:15:39.070190 2327 log.go:181] (0xc0000e80b0) Reply frame received for 1\nI0204 13:15:39.070234 2327 log.go:181] (0xc0000e80b0) (0xc000bd43c0) Create stream\nI0204 13:15:39.070249 2327 log.go:181] (0xc0000e80b0) (0xc000bd43c0) Stream added, broadcasting: 3\nI0204 13:15:39.075505 2327 log.go:181] (0xc0000e80b0) Reply frame received for 3\nI0204 13:15:39.075542 2327 log.go:181] (0xc0000e80b0) (0xc0007edd60) Create stream\nI0204 13:15:39.075551 2327 log.go:181] (0xc0000e80b0) (0xc0007edd60) Stream added, broadcasting: 5\nI0204 13:15:39.078225 2327 log.go:181] (0xc0000e80b0) Reply frame received for 5\nI0204 13:15:39.128228 2327 log.go:181] (0xc0000e80b0) Data frame received for 5\nI0204 13:15:39.128252 2327 log.go:181] (0xc0007edd60) (5) Data frame handling\nI0204 13:15:39.128268 2327 log.go:181] (0xc0007edd60) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0204 13:15:39.138896 2327 log.go:181] (0xc0000e80b0) Data frame received for 3\nI0204 13:15:39.138915 2327 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI0204 13:15:39.138931 2327 log.go:181] (0xc000bd43c0) (3) Data frame sent\nI0204 13:15:39.138937 2327 log.go:181] (0xc0000e80b0) Data frame received for 3\nI0204 13:15:39.138943 2327 log.go:181] (0xc000bd43c0) (3) Data frame handling\nI0204 13:15:39.138984 2327 log.go:181] (0xc0000e80b0) Data frame received for 5\nI0204 13:15:39.138999 2327 log.go:181] (0xc0007edd60) (5) Data frame handling\nI0204 13:15:39.140053 2327 log.go:181] (0xc0000e80b0) Data frame received for 1\nI0204 13:15:39.140125 2327 log.go:181] (0xc0007ed0e0) (1) Data frame handling\nI0204 13:15:39.140153 2327 log.go:181] (0xc0007ed0e0) (1) Data frame sent\nI0204 13:15:39.140179 2327 log.go:181] (0xc0000e80b0) (0xc0007ed0e0) Stream removed, broadcasting: 1\nI0204 13:15:39.140246 2327 log.go:181] (0xc0000e80b0) Go away received\nI0204 13:15:39.140591 2327 
log.go:181] (0xc0000e80b0) (0xc0007ed0e0) Stream removed, broadcasting: 1\nI0204 13:15:39.140612 2327 log.go:181] (0xc0000e80b0) (0xc000bd43c0) Stream removed, broadcasting: 3\nI0204 13:15:39.140624 2327 log.go:181] (0xc0000e80b0) (0xc0007edd60) Stream removed, broadcasting: 5\n" Feb 4 13:15:39.145: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 4 13:15:39.145: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 4 13:15:39.145: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Feb 4 13:17:09.261: INFO: Deleting all statefulset in ns statefulset-8351 Feb 4 13:17:09.264: INFO: Scaling statefulset ss to 0 Feb 4 13:17:09.274: INFO: Waiting for statefulset status.replicas updated to 0 Feb 4 13:17:09.276: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:17:09.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8351" for this suite. • [SLOW TEST:162.606 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":311,"completed":109,"skipped":1770,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:17:09.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:740 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating service in namespace services-1872 Feb 4 13:17:13.465: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-1872 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' 
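The StatefulSet spec that just completed above works by breaking and restoring each pod's readiness probe: moving index.html out of /usr/local/apache2/htdocs makes the httpd probe fail, flipping every pod to Ready=false so the test can confirm the controller refuses to scale past 3 replicas while any pod is unhealthy. Moving the file back restores readiness before the scale to 0, which the log then verifies happened in reverse ordinal order (ss-2, ss-1, ss-0). A minimal client-go sketch of that final scale-to-zero step follows; the namespace and set name come from the log, the clientset wiring is assumed, and this is not the e2e framework's own code:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the test run uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Fetch the Scale subresource of the StatefulSet and set replicas to 0.
	scale, err := cs.AppsV1().StatefulSets("statefulset-8351").GetScale(ctx, "ss", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 0
	if _, err := cs.AppsV1().StatefulSets("statefulset-8351").UpdateScale(ctx, "ss", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	// Under the default OrderedReady pod management policy, the controller
	// removes pods one at a time in reverse ordinal order: ss-2, ss-1, ss-0.
	fmt.Println("scaled ss to 0")
}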
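The Services test beginning here opens with a proxy-mode probe: the kubectl exec command above curls kube-proxy's /proxyMode endpoint on port 10249 from inside a detector pod, and the captured stdout below reports "iptables". The test then creates a NodePort service with ClientIP session affinity and a timeout, curls one node:port (172.18.0.14:32132) 16 times with every response coming from affinity-nodeport-timeout-td6kv, waits roughly 20 seconds, and sees the next request answered by affinity-nodeport-timeout-jjztk, showing the affinity entry expired. A hedged sketch of such a service object follows; the 10-second timeout and the 9376 target port are illustrative values, not read from the log:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	timeout := int32(10) // illustrative; the log only shows affinity expiring within ~20s
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport-timeout"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"name": "affinity-nodeport-timeout"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(9376), // assumed backend port
			}},
			// ClientIP affinity: repeated requests from one source IP stick to
			// a single backend pod until the configured timeout elapses.
			SessionAffinity: corev1.ServiceAffinityClientIP,
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
			},
		},
	}
	if _, err := cs.CoreV1().Services("services-1872").Create(context.Background(), svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}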
Feb 4 13:17:13.712: INFO: stderr: "I0204 13:17:13.607901 2344 log.go:181] (0xc00001d130) (0xc000730780) Create stream\nI0204 13:17:13.607952 2344 log.go:181] (0xc00001d130) (0xc000730780) Stream added, broadcasting: 1\nI0204 13:17:13.609755 2344 log.go:181] (0xc00001d130) Reply frame received for 1\nI0204 13:17:13.609804 2344 log.go:181] (0xc00001d130) (0xc000962280) Create stream\nI0204 13:17:13.609817 2344 log.go:181] (0xc00001d130) (0xc000962280) Stream added, broadcasting: 3\nI0204 13:17:13.610950 2344 log.go:181] (0xc00001d130) Reply frame received for 3\nI0204 13:17:13.611001 2344 log.go:181] (0xc00001d130) (0xc0001205a0) Create stream\nI0204 13:17:13.611016 2344 log.go:181] (0xc00001d130) (0xc0001205a0) Stream added, broadcasting: 5\nI0204 13:17:13.611889 2344 log.go:181] (0xc00001d130) Reply frame received for 5\nI0204 13:17:13.702280 2344 log.go:181] (0xc00001d130) Data frame received for 5\nI0204 13:17:13.702309 2344 log.go:181] (0xc0001205a0) (5) Data frame handling\nI0204 13:17:13.702321 2344 log.go:181] (0xc0001205a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0204 13:17:13.702543 2344 log.go:181] (0xc00001d130) Data frame received for 3\nI0204 13:17:13.702556 2344 log.go:181] (0xc000962280) (3) Data frame handling\nI0204 13:17:13.702572 2344 log.go:181] (0xc000962280) (3) Data frame sent\nI0204 13:17:13.703330 2344 log.go:181] (0xc00001d130) Data frame received for 3\nI0204 13:17:13.703350 2344 log.go:181] (0xc000962280) (3) Data frame handling\nI0204 13:17:13.703603 2344 log.go:181] (0xc00001d130) Data frame received for 5\nI0204 13:17:13.703637 2344 log.go:181] (0xc0001205a0) (5) Data frame handling\nI0204 13:17:13.705720 2344 log.go:181] (0xc00001d130) Data frame received for 1\nI0204 13:17:13.705734 2344 log.go:181] (0xc000730780) (1) Data frame handling\nI0204 13:17:13.705744 2344 log.go:181] (0xc000730780) (1) Data frame sent\nI0204 13:17:13.705754 2344 log.go:181] (0xc00001d130) (0xc000730780) Stream removed, broadcasting: 1\nI0204 13:17:13.705764 2344 log.go:181] (0xc00001d130) Go away received\nI0204 13:17:13.706017 2344 log.go:181] (0xc00001d130) (0xc000730780) Stream removed, broadcasting: 1\nI0204 13:17:13.706030 2344 log.go:181] (0xc00001d130) (0xc000962280) Stream removed, broadcasting: 3\nI0204 13:17:13.706035 2344 log.go:181] (0xc00001d130) (0xc0001205a0) Stream removed, broadcasting: 5\n" Feb 4 13:17:13.712: INFO: stdout: "iptables" Feb 4 13:17:13.712: INFO: proxyMode: iptables Feb 4 13:17:13.753: INFO: Waiting for pod kube-proxy-mode-detector to disappear Feb 4 13:17:13.771: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-1872 STEP: creating replication controller affinity-nodeport-timeout in namespace services-1872 I0204 13:17:14.064817 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-1872, replica count: 3 I0204 13:17:17.115316 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 13:17:20.115658 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 4 13:17:20.126: INFO: Creating new exec pod Feb 4 13:17:25.183: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-1872 exec 
execpod-affinityvf79n -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' Feb 4 13:17:25.418: INFO: stderr: "I0204 13:17:25.324240 2363 log.go:181] (0xc000d9c000) (0xc000d94320) Create stream\nI0204 13:17:25.324344 2363 log.go:181] (0xc000d9c000) (0xc000d94320) Stream added, broadcasting: 1\nI0204 13:17:25.326365 2363 log.go:181] (0xc000d9c000) Reply frame received for 1\nI0204 13:17:25.326428 2363 log.go:181] (0xc000d9c000) (0xc000988000) Create stream\nI0204 13:17:25.326468 2363 log.go:181] (0xc000d9c000) (0xc000988000) Stream added, broadcasting: 3\nI0204 13:17:25.327495 2363 log.go:181] (0xc000d9c000) Reply frame received for 3\nI0204 13:17:25.327530 2363 log.go:181] (0xc000d9c000) (0xc000a0e0a0) Create stream\nI0204 13:17:25.327543 2363 log.go:181] (0xc000d9c000) (0xc000a0e0a0) Stream added, broadcasting: 5\nI0204 13:17:25.328675 2363 log.go:181] (0xc000d9c000) Reply frame received for 5\nI0204 13:17:25.411123 2363 log.go:181] (0xc000d9c000) Data frame received for 3\nI0204 13:17:25.411163 2363 log.go:181] (0xc000988000) (3) Data frame handling\nI0204 13:17:25.411203 2363 log.go:181] (0xc000d9c000) Data frame received for 5\nI0204 13:17:25.411211 2363 log.go:181] (0xc000a0e0a0) (5) Data frame handling\nI0204 13:17:25.411219 2363 log.go:181] (0xc000a0e0a0) (5) Data frame sent\nI0204 13:17:25.411226 2363 log.go:181] (0xc000d9c000) Data frame received for 5\nI0204 13:17:25.411231 2363 log.go:181] (0xc000a0e0a0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0204 13:17:25.412741 2363 log.go:181] (0xc000d9c000) Data frame received for 1\nI0204 13:17:25.412768 2363 log.go:181] (0xc000d94320) (1) Data frame handling\nI0204 13:17:25.412796 2363 log.go:181] (0xc000d94320) (1) Data frame sent\nI0204 13:17:25.412814 2363 log.go:181] (0xc000d9c000) (0xc000d94320) Stream removed, broadcasting: 1\nI0204 13:17:25.412930 2363 log.go:181] (0xc000d9c000) Go away received\nI0204 13:17:25.413370 2363 log.go:181] (0xc000d9c000) (0xc000d94320) Stream removed, broadcasting: 1\nI0204 13:17:25.413392 2363 log.go:181] (0xc000d9c000) (0xc000988000) Stream removed, broadcasting: 3\nI0204 13:17:25.413402 2363 log.go:181] (0xc000d9c000) (0xc000a0e0a0) Stream removed, broadcasting: 5\n" Feb 4 13:17:25.418: INFO: stdout: "" Feb 4 13:17:25.419: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-1872 exec execpod-affinityvf79n -- /bin/sh -x -c nc -zv -t -w 2 10.96.237.240 80' Feb 4 13:17:25.644: INFO: stderr: "I0204 13:17:25.551146 2382 log.go:181] (0xc00062a420) (0xc0005a4000) Create stream\nI0204 13:17:25.551223 2382 log.go:181] (0xc00062a420) (0xc0005a4000) Stream added, broadcasting: 1\nI0204 13:17:25.553817 2382 log.go:181] (0xc00062a420) Reply frame received for 1\nI0204 13:17:25.553878 2382 log.go:181] (0xc00062a420) (0xc0005a40a0) Create stream\nI0204 13:17:25.553905 2382 log.go:181] (0xc00062a420) (0xc0005a40a0) Stream added, broadcasting: 3\nI0204 13:17:25.555633 2382 log.go:181] (0xc00062a420) Reply frame received for 3\nI0204 13:17:25.555668 2382 log.go:181] (0xc00062a420) (0xc0005a4140) Create stream\nI0204 13:17:25.555677 2382 log.go:181] (0xc00062a420) (0xc0005a4140) Stream added, broadcasting: 5\nI0204 13:17:25.556782 2382 log.go:181] (0xc00062a420) Reply frame received for 5\nI0204 13:17:25.627287 2382 log.go:181] (0xc00062a420) Data frame received for 3\nI0204 13:17:25.627329 2382 log.go:181] (0xc0005a40a0) (3) Data 
frame handling\nI0204 13:17:25.627587 2382 log.go:181] (0xc00062a420) Data frame received for 5\nI0204 13:17:25.627628 2382 log.go:181] (0xc0005a4140) (5) Data frame handling\nI0204 13:17:25.627648 2382 log.go:181] (0xc0005a4140) (5) Data frame sent\nI0204 13:17:25.627663 2382 log.go:181] (0xc00062a420) Data frame received for 5\nI0204 13:17:25.627676 2382 log.go:181] (0xc0005a4140) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.237.240 80\nConnection to 10.96.237.240 80 port [tcp/http] succeeded!\nI0204 13:17:25.638347 2382 log.go:181] (0xc00062a420) Data frame received for 1\nI0204 13:17:25.638377 2382 log.go:181] (0xc0005a4000) (1) Data frame handling\nI0204 13:17:25.638389 2382 log.go:181] (0xc0005a4000) (1) Data frame sent\nI0204 13:17:25.638402 2382 log.go:181] (0xc00062a420) (0xc0005a4000) Stream removed, broadcasting: 1\nI0204 13:17:25.638422 2382 log.go:181] (0xc00062a420) Go away received\nI0204 13:17:25.638737 2382 log.go:181] (0xc00062a420) (0xc0005a4000) Stream removed, broadcasting: 1\nI0204 13:17:25.638754 2382 log.go:181] (0xc00062a420) (0xc0005a40a0) Stream removed, broadcasting: 3\nI0204 13:17:25.638760 2382 log.go:181] (0xc00062a420) (0xc0005a4140) Stream removed, broadcasting: 5\n" Feb 4 13:17:25.644: INFO: stdout: "" Feb 4 13:17:25.644: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-1872 exec execpod-affinityvf79n -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 32132' Feb 4 13:17:25.841: INFO: stderr: "I0204 13:17:25.774903 2400 log.go:181] (0xc000028370) (0xc00084d540) Create stream\nI0204 13:17:25.774988 2400 log.go:181] (0xc000028370) (0xc00084d540) Stream added, broadcasting: 1\nI0204 13:17:25.777053 2400 log.go:181] (0xc000028370) Reply frame received for 1\nI0204 13:17:25.777123 2400 log.go:181] (0xc000028370) (0xc00069c140) Create stream\nI0204 13:17:25.777154 2400 log.go:181] (0xc000028370) (0xc00069c140) Stream added, broadcasting: 3\nI0204 13:17:25.778152 2400 log.go:181] (0xc000028370) Reply frame received for 3\nI0204 13:17:25.778180 2400 log.go:181] (0xc000028370) (0xc00084dae0) Create stream\nI0204 13:17:25.778191 2400 log.go:181] (0xc000028370) (0xc00084dae0) Stream added, broadcasting: 5\nI0204 13:17:25.779182 2400 log.go:181] (0xc000028370) Reply frame received for 5\nI0204 13:17:25.833180 2400 log.go:181] (0xc000028370) Data frame received for 5\nI0204 13:17:25.833211 2400 log.go:181] (0xc00084dae0) (5) Data frame handling\nI0204 13:17:25.833245 2400 log.go:181] (0xc00084dae0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.14 32132\nConnection to 172.18.0.14 32132 port [tcp/*] succeeded!\nI0204 13:17:25.833399 2400 log.go:181] (0xc000028370) Data frame received for 3\nI0204 13:17:25.833439 2400 log.go:181] (0xc00069c140) (3) Data frame handling\nI0204 13:17:25.833503 2400 log.go:181] (0xc000028370) Data frame received for 5\nI0204 13:17:25.833527 2400 log.go:181] (0xc00084dae0) (5) Data frame handling\nI0204 13:17:25.834869 2400 log.go:181] (0xc000028370) Data frame received for 1\nI0204 13:17:25.834895 2400 log.go:181] (0xc00084d540) (1) Data frame handling\nI0204 13:17:25.834922 2400 log.go:181] (0xc00084d540) (1) Data frame sent\nI0204 13:17:25.834947 2400 log.go:181] (0xc000028370) (0xc00084d540) Stream removed, broadcasting: 1\nI0204 13:17:25.835100 2400 log.go:181] (0xc000028370) Go away received\nI0204 13:17:25.835642 2400 log.go:181] (0xc000028370) (0xc00084d540) Stream removed, broadcasting: 1\nI0204 13:17:25.835662 2400 log.go:181] (0xc000028370) (0xc00069c140) 
Stream removed, broadcasting: 3\nI0204 13:17:25.835670 2400 log.go:181] (0xc000028370) (0xc00084dae0) Stream removed, broadcasting: 5\n" Feb 4 13:17:25.841: INFO: stdout: "" Feb 4 13:17:25.841: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-1872 exec execpod-affinityvf79n -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.16 32132' Feb 4 13:17:26.081: INFO: stderr: "I0204 13:17:25.990222 2418 log.go:181] (0xc0001fc000) (0xc000a740a0) Create stream\nI0204 13:17:25.990317 2418 log.go:181] (0xc0001fc000) (0xc000a740a0) Stream added, broadcasting: 1\nI0204 13:17:25.992211 2418 log.go:181] (0xc0001fc000) Reply frame received for 1\nI0204 13:17:25.992260 2418 log.go:181] (0xc0001fc000) (0xc000a90000) Create stream\nI0204 13:17:25.992280 2418 log.go:181] (0xc0001fc000) (0xc000a90000) Stream added, broadcasting: 3\nI0204 13:17:25.993394 2418 log.go:181] (0xc0001fc000) Reply frame received for 3\nI0204 13:17:25.993442 2418 log.go:181] (0xc0001fc000) (0xc000a90960) Create stream\nI0204 13:17:25.993455 2418 log.go:181] (0xc0001fc000) (0xc000a90960) Stream added, broadcasting: 5\nI0204 13:17:25.994785 2418 log.go:181] (0xc0001fc000) Reply frame received for 5\nI0204 13:17:26.070239 2418 log.go:181] (0xc0001fc000) Data frame received for 5\nI0204 13:17:26.070303 2418 log.go:181] (0xc000a90960) (5) Data frame handling\nI0204 13:17:26.070324 2418 log.go:181] (0xc000a90960) (5) Data frame sent\nI0204 13:17:26.070338 2418 log.go:181] (0xc0001fc000) Data frame received for 5\nI0204 13:17:26.070348 2418 log.go:181] (0xc000a90960) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.16 32132\nConnection to 172.18.0.16 32132 port [tcp/*] succeeded!\nI0204 13:17:26.070392 2418 log.go:181] (0xc0001fc000) Data frame received for 3\nI0204 13:17:26.070445 2418 log.go:181] (0xc000a90000) (3) Data frame handling\nI0204 13:17:26.071831 2418 log.go:181] (0xc0001fc000) Data frame received for 1\nI0204 13:17:26.071848 2418 log.go:181] (0xc000a740a0) (1) Data frame handling\nI0204 13:17:26.071858 2418 log.go:181] (0xc000a740a0) (1) Data frame sent\nI0204 13:17:26.071868 2418 log.go:181] (0xc0001fc000) (0xc000a740a0) Stream removed, broadcasting: 1\nI0204 13:17:26.071878 2418 log.go:181] (0xc0001fc000) Go away received\nI0204 13:17:26.072631 2418 log.go:181] (0xc0001fc000) (0xc000a740a0) Stream removed, broadcasting: 1\nI0204 13:17:26.072669 2418 log.go:181] (0xc0001fc000) (0xc000a90000) Stream removed, broadcasting: 3\nI0204 13:17:26.072688 2418 log.go:181] (0xc0001fc000) (0xc000a90960) Stream removed, broadcasting: 5\n" Feb 4 13:17:26.081: INFO: stdout: "" Feb 4 13:17:26.081: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-1872 exec execpod-affinityvf79n -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.14:32132/ ; done' Feb 4 13:17:26.399: INFO: stderr: "I0204 13:17:26.232322 2435 log.go:181] (0xc00003a0b0) (0xc000a3c000) Create stream\nI0204 13:17:26.232416 2435 log.go:181] (0xc00003a0b0) (0xc000a3c000) Stream added, broadcasting: 1\nI0204 13:17:26.234796 2435 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI0204 13:17:26.234860 2435 log.go:181] (0xc00003a0b0) (0xc000428aa0) Create stream\nI0204 13:17:26.234889 2435 log.go:181] (0xc00003a0b0) (0xc000428aa0) Stream added, broadcasting: 3\nI0204 13:17:26.235988 2435 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI0204 13:17:26.236014 2435 log.go:181] (0xc00003a0b0) 
(0xc00019cfa0) Create stream\nI0204 13:17:26.236021 2435 log.go:181] (0xc00003a0b0) (0xc00019cfa0) Stream added, broadcasting: 5\nI0204 13:17:26.237396 2435 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI0204 13:17:26.306666 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.306717 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.306753 2435 log.go:181] (0xc000428aa0) (3) Data frame sent\nI0204 13:17:26.306785 2435 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0204 13:17:26.306837 2435 log.go:181] (0xc00019cfa0) (5) Data frame handling\nI0204 13:17:26.306873 2435 log.go:181] (0xc00019cfa0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32132/\nI0204 13:17:26.311190 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.311213 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.311223 2435 log.go:181] (0xc000428aa0) (3) Data frame sent\nI0204 13:17:26.311230 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.311237 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.311254 2435 log.go:181] (0xc000428aa0) (3) Data frame sent\nI0204 13:17:26.311262 2435 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0204 13:17:26.311269 2435 log.go:181] (0xc00019cfa0) (5) Data frame handling\nI0204 13:17:26.311280 2435 log.go:181] (0xc00019cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32132/\nI0204 13:17:26.316803 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.316819 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.316829 2435 log.go:181] (0xc000428aa0) (3) Data frame sent\nI0204 13:17:26.317547 2435 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0204 13:17:26.317566 2435 log.go:181] (0xc00019cfa0) (5) Data frame handling\nI0204 13:17:26.317578 2435 log.go:181] (0xc00019cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32132/\nI0204 13:17:26.317709 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.317733 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.317747 2435 log.go:181] (0xc000428aa0) (3) Data frame sent\nI0204 13:17:26.321184 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.321203 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.321217 2435 log.go:181] (0xc000428aa0) (3) Data frame sent\nI0204 13:17:26.321810 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.321821 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.321830 2435 log.go:181] (0xc000428aa0) (3) Data frame sent\nI0204 13:17:26.321838 2435 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0204 13:17:26.321844 2435 log.go:181] (0xc00019cfa0) (5) Data frame handling\nI0204 13:17:26.321848 2435 log.go:181] (0xc00019cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32132/\nI0204 13:17:26.325234 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.325271 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.325312 2435 log.go:181] (0xc000428aa0) (3) Data frame sent\nI0204 13:17:26.325877 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.325887 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.325893 2435 log.go:181] (0xc000428aa0) (3) Data frame 
sent\nI0204 13:17:26.325928 2435 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0204 13:17:26.325965 2435 log.go:181] (0xc00019cfa0) (5) Data frame handling\nI0204 13:17:26.326006 2435 log.go:181] (0xc00019cfa0) (5) Data frame sent\nI0204 13:17:26.326030 2435 log.go:181] (0xc00003a0b0) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32132/I0204 13:17:26.326059 2435 log.go:181] (0xc00019cfa0) (5) Data frame handling\nI0204 13:17:26.326085 2435 log.go:181] (0xc00019cfa0) (5) Data frame sent\n\nI0204 13:17:26.329735 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.329747 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.329754 2435 log.go:181] (0xc000428aa0) (3) Data frame sent\nI0204 13:17:26.330372 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.330385 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.330393 2435 log.go:181] (0xc000428aa0) (3) Data frame sent\nI0204 13:17:26.330431 2435 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0204 13:17:26.330465 2435 log.go:181] (0xc00019cfa0) (5) Data frame handling\nI0204 13:17:26.330504 2435 log.go:181] (0xc00019cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32132/\nI0204 13:17:26.336767 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.336803 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.336951 2435 log.go:181] (0xc000428aa0) (3) Data frame sent\nI0204 13:17:26.337746 2435 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0204 13:17:26.337759 2435 log.go:181] (0xc00019cfa0) (5) Data frame handling\nI0204 13:17:26.337765 2435 log.go:181] (0xc00019cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32132/\nI0204 13:17:26.337773 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.337780 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.337785 2435 log.go:181] (0xc000428aa0) (3) Data frame sent\nI0204 13:17:26.344152 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.344175 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.344191 2435 log.go:181] (0xc000428aa0) (3) Data frame sent\nI0204 13:17:26.345126 2435 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0204 13:17:26.345139 2435 log.go:181] (0xc00019cfa0) (5) Data frame handling\nI0204 13:17:26.345150 2435 log.go:181] (0xc00019cfa0) (5) Data frame sent\nI0204 13:17:26.345155 2435 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0204 13:17:26.345160 2435 log.go:181] (0xc00019cfa0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32132/\nI0204 13:17:26.345182 2435 log.go:181] (0xc00019cfa0) (5) Data frame sent\nI0204 13:17:26.347674 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.347691 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.347709 2435 log.go:181] (0xc000428aa0) (3) Data frame sent\nI0204 13:17:26.348277 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.348292 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.348308 2435 log.go:181] (0xc000428aa0) (3) Data frame sent\nI0204 13:17:26.349254 2435 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0204 13:17:26.349268 2435 log.go:181] (0xc00019cfa0) (5) Data frame handling\nI0204 13:17:26.349280 2435 log.go:181] (0xc00019cfa0) (5) 
Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32132/\nI0204 13:17:26.349539 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.349549 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.349557 2435 log.go:181] (0xc000428aa0) (3) Data frame sent\nI0204 13:17:26.354717 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.354747 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.354783 2435 log.go:181] (0xc000428aa0) (3) Data frame sent\nI0204 13:17:26.355313 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.355340 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.355351 2435 log.go:181] (0xc000428aa0) (3) Data frame sent\nI0204 13:17:26.355369 2435 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0204 13:17:26.355395 2435 log.go:181] (0xc00019cfa0) (5) Data frame handling\nI0204 13:17:26.355408 2435 log.go:181] (0xc00019cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32132/\nI0204 13:17:26.361394 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.361417 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.361437 2435 log.go:181] (0xc000428aa0) (3) Data frame sent\nI0204 13:17:26.362405 2435 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0204 13:17:26.362437 2435 log.go:181] (0xc00019cfa0) (5) Data frame handling\nI0204 13:17:26.362452 2435 log.go:181] (0xc00019cfa0) (5) Data frame sent\nI0204 13:17:26.362467 2435 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0204 13:17:26.362482 2435 log.go:181] (0xc00019cfa0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32132/\nI0204 13:17:26.362510 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.362534 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.362545 2435 log.go:181] (0xc000428aa0) (3) Data frame sent\nI0204 13:17:26.362559 2435 log.go:181] (0xc00019cfa0) (5) Data frame sent\nI0204 13:17:26.366124 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.366156 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.366187 2435 log.go:181] (0xc000428aa0) (3) Data frame sent\nI0204 13:17:26.366891 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.366965 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.366981 2435 log.go:181] (0xc000428aa0) (3) Data frame sent\nI0204 13:17:26.366997 2435 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0204 13:17:26.367008 2435 log.go:181] (0xc00019cfa0) (5) Data frame handling\nI0204 13:17:26.367017 2435 log.go:181] (0xc00019cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32132/\nI0204 13:17:26.370907 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.370929 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.370940 2435 log.go:181] (0xc000428aa0) (3) Data frame sent\nI0204 13:17:26.371208 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.371233 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.371243 2435 log.go:181] (0xc000428aa0) (3) Data frame sent\nI0204 13:17:26.371258 2435 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0204 13:17:26.371269 2435 log.go:181] (0xc00019cfa0) (5) Data frame handling\nI0204 13:17:26.371284 2435 log.go:181] 
(0xc00019cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32132/\nI0204 13:17:26.373896 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.373907 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.373913 2435 log.go:181] (0xc000428aa0) (3) Data frame sent\nI0204 13:17:26.374382 2435 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0204 13:17:26.374399 2435 log.go:181] (0xc00019cfa0) (5) Data frame handling\nI0204 13:17:26.374408 2435 log.go:181] (0xc00019cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32132/\nI0204 13:17:26.374517 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.374550 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.374570 2435 log.go:181] (0xc000428aa0) (3) Data frame sent\nI0204 13:17:26.379131 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.379144 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.379150 2435 log.go:181] (0xc000428aa0) (3) Data frame sent\nI0204 13:17:26.379751 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.379778 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.379792 2435 log.go:181] (0xc000428aa0) (3) Data frame sent\nI0204 13:17:26.379810 2435 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0204 13:17:26.379820 2435 log.go:181] (0xc00019cfa0) (5) Data frame handling\nI0204 13:17:26.379830 2435 log.go:181] (0xc00019cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32132/\nI0204 13:17:26.384921 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.384946 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.384959 2435 log.go:181] (0xc000428aa0) (3) Data frame sent\nI0204 13:17:26.385588 2435 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0204 13:17:26.385674 2435 log.go:181] (0xc00019cfa0) (5) Data frame handling\nI0204 13:17:26.385698 2435 log.go:181] (0xc00019cfa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32132/\nI0204 13:17:26.385720 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.385738 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.385753 2435 log.go:181] (0xc000428aa0) (3) Data frame sent\nI0204 13:17:26.389626 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.389654 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.389692 2435 log.go:181] (0xc000428aa0) (3) Data frame sent\nI0204 13:17:26.390180 2435 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0204 13:17:26.390221 2435 log.go:181] (0xc00019cfa0) (5) Data frame handling\nI0204 13:17:26.390431 2435 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 13:17:26.390473 2435 log.go:181] (0xc000428aa0) (3) Data frame handling\nI0204 13:17:26.391813 2435 log.go:181] (0xc00003a0b0) Data frame received for 1\nI0204 13:17:26.391838 2435 log.go:181] (0xc000a3c000) (1) Data frame handling\nI0204 13:17:26.391851 2435 log.go:181] (0xc000a3c000) (1) Data frame sent\nI0204 13:17:26.391866 2435 log.go:181] (0xc00003a0b0) (0xc000a3c000) Stream removed, broadcasting: 1\nI0204 13:17:26.391893 2435 log.go:181] (0xc00003a0b0) Go away received\nI0204 13:17:26.392468 2435 log.go:181] (0xc00003a0b0) (0xc000a3c000) Stream removed, broadcasting: 1\nI0204 13:17:26.392492 2435 log.go:181] (0xc00003a0b0) 
(0xc000428aa0) Stream removed, broadcasting: 3\nI0204 13:17:26.392504 2435 log.go:181] (0xc00003a0b0) (0xc00019cfa0) Stream removed, broadcasting: 5\n" Feb 4 13:17:26.399: INFO: stdout: "\naffinity-nodeport-timeout-td6kv\naffinity-nodeport-timeout-td6kv\naffinity-nodeport-timeout-td6kv\naffinity-nodeport-timeout-td6kv\naffinity-nodeport-timeout-td6kv\naffinity-nodeport-timeout-td6kv\naffinity-nodeport-timeout-td6kv\naffinity-nodeport-timeout-td6kv\naffinity-nodeport-timeout-td6kv\naffinity-nodeport-timeout-td6kv\naffinity-nodeport-timeout-td6kv\naffinity-nodeport-timeout-td6kv\naffinity-nodeport-timeout-td6kv\naffinity-nodeport-timeout-td6kv\naffinity-nodeport-timeout-td6kv\naffinity-nodeport-timeout-td6kv" Feb 4 13:17:26.399: INFO: Received response from host: affinity-nodeport-timeout-td6kv Feb 4 13:17:26.399: INFO: Received response from host: affinity-nodeport-timeout-td6kv Feb 4 13:17:26.399: INFO: Received response from host: affinity-nodeport-timeout-td6kv Feb 4 13:17:26.399: INFO: Received response from host: affinity-nodeport-timeout-td6kv Feb 4 13:17:26.399: INFO: Received response from host: affinity-nodeport-timeout-td6kv Feb 4 13:17:26.399: INFO: Received response from host: affinity-nodeport-timeout-td6kv Feb 4 13:17:26.399: INFO: Received response from host: affinity-nodeport-timeout-td6kv Feb 4 13:17:26.399: INFO: Received response from host: affinity-nodeport-timeout-td6kv Feb 4 13:17:26.399: INFO: Received response from host: affinity-nodeport-timeout-td6kv Feb 4 13:17:26.399: INFO: Received response from host: affinity-nodeport-timeout-td6kv Feb 4 13:17:26.399: INFO: Received response from host: affinity-nodeport-timeout-td6kv Feb 4 13:17:26.399: INFO: Received response from host: affinity-nodeport-timeout-td6kv Feb 4 13:17:26.399: INFO: Received response from host: affinity-nodeport-timeout-td6kv Feb 4 13:17:26.399: INFO: Received response from host: affinity-nodeport-timeout-td6kv Feb 4 13:17:26.399: INFO: Received response from host: affinity-nodeport-timeout-td6kv Feb 4 13:17:26.399: INFO: Received response from host: affinity-nodeport-timeout-td6kv Feb 4 13:17:26.399: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-1872 exec execpod-affinityvf79n -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.14:32132/' Feb 4 13:17:26.603: INFO: stderr: "I0204 13:17:26.534035 2453 log.go:181] (0xc00003a420) (0xc000942000) Create stream\nI0204 13:17:26.534080 2453 log.go:181] (0xc00003a420) (0xc000942000) Stream added, broadcasting: 1\nI0204 13:17:26.535410 2453 log.go:181] (0xc00003a420) Reply frame received for 1\nI0204 13:17:26.535448 2453 log.go:181] (0xc00003a420) (0xc00053f860) Create stream\nI0204 13:17:26.535460 2453 log.go:181] (0xc00003a420) (0xc00053f860) Stream added, broadcasting: 3\nI0204 13:17:26.536151 2453 log.go:181] (0xc00003a420) Reply frame received for 3\nI0204 13:17:26.536189 2453 log.go:181] (0xc00003a420) (0xc000922000) Create stream\nI0204 13:17:26.536208 2453 log.go:181] (0xc00003a420) (0xc000922000) Stream added, broadcasting: 5\nI0204 13:17:26.537036 2453 log.go:181] (0xc00003a420) Reply frame received for 5\nI0204 13:17:26.591008 2453 log.go:181] (0xc00003a420) Data frame received for 5\nI0204 13:17:26.591039 2453 log.go:181] (0xc000922000) (5) Data frame handling\nI0204 13:17:26.591062 2453 log.go:181] (0xc000922000) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32132/\nI0204 13:17:26.594839 2453 log.go:181] (0xc00003a420) Data 
frame received for 3\nI0204 13:17:26.594954 2453 log.go:181] (0xc00053f860) (3) Data frame handling\nI0204 13:17:26.595006 2453 log.go:181] (0xc00053f860) (3) Data frame sent\nI0204 13:17:26.595086 2453 log.go:181] (0xc00003a420) Data frame received for 5\nI0204 13:17:26.595123 2453 log.go:181] (0xc000922000) (5) Data frame handling\nI0204 13:17:26.595156 2453 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 13:17:26.595173 2453 log.go:181] (0xc00053f860) (3) Data frame handling\nI0204 13:17:26.597398 2453 log.go:181] (0xc00003a420) Data frame received for 1\nI0204 13:17:26.597429 2453 log.go:181] (0xc000942000) (1) Data frame handling\nI0204 13:17:26.597450 2453 log.go:181] (0xc000942000) (1) Data frame sent\nI0204 13:17:26.597464 2453 log.go:181] (0xc00003a420) (0xc000942000) Stream removed, broadcasting: 1\nI0204 13:17:26.597478 2453 log.go:181] (0xc00003a420) Go away received\nI0204 13:17:26.597973 2453 log.go:181] (0xc00003a420) (0xc000942000) Stream removed, broadcasting: 1\nI0204 13:17:26.597997 2453 log.go:181] (0xc00003a420) (0xc00053f860) Stream removed, broadcasting: 3\nI0204 13:17:26.598021 2453 log.go:181] (0xc00003a420) (0xc000922000) Stream removed, broadcasting: 5\n" Feb 4 13:17:26.604: INFO: stdout: "affinity-nodeport-timeout-td6kv" Feb 4 13:17:46.604: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-1872 exec execpod-affinityvf79n -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.14:32132/' Feb 4 13:17:46.849: INFO: stderr: "I0204 13:17:46.746094 2471 log.go:181] (0xc000194370) (0xc0001a5ea0) Create stream\nI0204 13:17:46.746148 2471 log.go:181] (0xc000194370) (0xc0001a5ea0) Stream added, broadcasting: 1\nI0204 13:17:46.747811 2471 log.go:181] (0xc000194370) Reply frame received for 1\nI0204 13:17:46.747850 2471 log.go:181] (0xc000194370) (0xc00042ee60) Create stream\nI0204 13:17:46.747859 2471 log.go:181] (0xc000194370) (0xc00042ee60) Stream added, broadcasting: 3\nI0204 13:17:46.748646 2471 log.go:181] (0xc000194370) Reply frame received for 3\nI0204 13:17:46.748681 2471 log.go:181] (0xc000194370) (0xc00042f4a0) Create stream\nI0204 13:17:46.748689 2471 log.go:181] (0xc000194370) (0xc00042f4a0) Stream added, broadcasting: 5\nI0204 13:17:46.749438 2471 log.go:181] (0xc000194370) Reply frame received for 5\nI0204 13:17:46.840700 2471 log.go:181] (0xc000194370) Data frame received for 5\nI0204 13:17:46.840723 2471 log.go:181] (0xc00042f4a0) (5) Data frame handling\nI0204 13:17:46.840737 2471 log.go:181] (0xc00042f4a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:32132/\nI0204 13:17:46.841497 2471 log.go:181] (0xc000194370) Data frame received for 3\nI0204 13:17:46.841531 2471 log.go:181] (0xc00042ee60) (3) Data frame handling\nI0204 13:17:46.841562 2471 log.go:181] (0xc00042ee60) (3) Data frame sent\nI0204 13:17:46.841823 2471 log.go:181] (0xc000194370) Data frame received for 3\nI0204 13:17:46.841836 2471 log.go:181] (0xc00042ee60) (3) Data frame handling\nI0204 13:17:46.842064 2471 log.go:181] (0xc000194370) Data frame received for 5\nI0204 13:17:46.842088 2471 log.go:181] (0xc00042f4a0) (5) Data frame handling\nI0204 13:17:46.844093 2471 log.go:181] (0xc000194370) Data frame received for 1\nI0204 13:17:46.844112 2471 log.go:181] (0xc0001a5ea0) (1) Data frame handling\nI0204 13:17:46.844124 2471 log.go:181] (0xc0001a5ea0) (1) Data frame sent\nI0204 13:17:46.844135 2471 log.go:181] (0xc000194370) (0xc0001a5ea0) Stream removed, broadcasting: 
1\nI0204 13:17:46.844155 2471 log.go:181] (0xc000194370) Go away received\nI0204 13:17:46.844533 2471 log.go:181] (0xc000194370) (0xc0001a5ea0) Stream removed, broadcasting: 1\nI0204 13:17:46.844552 2471 log.go:181] (0xc000194370) (0xc00042ee60) Stream removed, broadcasting: 3\nI0204 13:17:46.844563 2471 log.go:181] (0xc000194370) (0xc00042f4a0) Stream removed, broadcasting: 5\n" Feb 4 13:17:46.849: INFO: stdout: "affinity-nodeport-timeout-jjztk" Feb 4 13:17:46.849: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-1872, will wait for the garbage collector to delete the pods Feb 4 13:17:46.926: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 6.673242ms Feb 4 13:17:47.626: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 700.211287ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:18:51.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1872" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:744 • [SLOW TEST:102.059 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":311,"completed":110,"skipped":1786,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:18:51.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Feb 4 13:18:51.470: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9273 9de81e55-7b10-41df-936e-67982d421fbc 2087354 0 2021-02-04 13:18:51 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-02-04 13:18:51 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Feb 4 13:18:51.470: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9273 9de81e55-7b10-41df-936e-67982d421fbc 
2087354 0 2021-02-04 13:18:51 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-02-04 13:18:51 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Feb 4 13:19:01.481: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9273 9de81e55-7b10-41df-936e-67982d421fbc 2087414 0 2021-02-04 13:18:51 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-02-04 13:19:01 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 4 13:19:01.482: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9273 9de81e55-7b10-41df-936e-67982d421fbc 2087414 0 2021-02-04 13:18:51 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-02-04 13:19:01 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Feb 4 13:19:11.491: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9273 9de81e55-7b10-41df-936e-67982d421fbc 2087499 0 2021-02-04 13:18:51 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-02-04 13:19:01 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 4 13:19:11.491: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9273 9de81e55-7b10-41df-936e-67982d421fbc 2087499 0 2021-02-04 13:18:51 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-02-04 13:19:01 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Feb 4 13:19:21.501: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9273 9de81e55-7b10-41df-936e-67982d421fbc 2087541 0 2021-02-04 13:18:51 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-02-04 13:19:01 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 4 13:19:21.501: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9273 9de81e55-7b10-41df-936e-67982d421fbc 2087541 0 2021-02-04 13:18:51 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2021-02-04 13:19:01 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the 
notification Feb 4 13:19:31.513: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9273 09b1989f-aa78-47cc-a90f-02d3360caede 2087561 0 2021-02-04 13:19:31 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-02-04 13:19:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Feb 4 13:19:31.513: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9273 09b1989f-aa78-47cc-a90f-02d3360caede 2087561 0 2021-02-04 13:19:31 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-02-04 13:19:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Feb 4 13:19:41.523: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9273 09b1989f-aa78-47cc-a90f-02d3360caede 2087581 0 2021-02-04 13:19:31 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-02-04 13:19:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Feb 4 13:19:41.523: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9273 09b1989f-aa78-47cc-a90f-02d3360caede 2087581 0 2021-02-04 13:19:31 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2021-02-04 13:19:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:19:51.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9273" for this suite. 
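The event sequence above, ADDED, MODIFIED (mutation: 1), MODIFIED (mutation: 2), and DELETED for configmap A on both the label-A watcher and the A-or-B watcher, followed by ADDED and DELETED for configmap B, is exactly what a label-selected watch is expected to deliver, with the resourceVersion increasing monotonically across events (2087354 through 2087581). A minimal client-go sketch of one such watcher follows, assuming a clientset built from the same kubeconfig; the namespace and label value are taken from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Watch only configmaps labeled for watcher A, as in the test's "watch on
	// configmaps with label A"; an A-or-B watcher would use a set-based selector
	// such as "watch-this-configmap in (multiple-watchers-A,multiple-watchers-B)".
	w, err := cs.CoreV1().ConfigMaps("watch-9273").Watch(context.Background(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		// ev.Type is ADDED, MODIFIED, or DELETED; ev.Object holds the ConfigMap.
		fmt.Println(ev.Type)
	}
}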
• [SLOW TEST:60.177 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":311,"completed":111,"skipped":1797,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:19:51.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 4 13:19:52.256: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 4 13:19:54.707: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748041592, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748041592, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748041592, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748041592, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 13:19:56.806: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748041592, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748041592, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748041592, 
loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748041592, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 4 13:19:59.767: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:20:00.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8954" for this suite. STEP: Destroying namespace "webhook-8954-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:8.558 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":311,"completed":112,"skipped":1871,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:20:00.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Feb 4 13:20:00.292: INFO: >>> kubeConfig: 
/root/.kube/config Feb 4 13:20:03.847: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:20:16.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5810" for this suite. • [SLOW TEST:16.058 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":311,"completed":113,"skipped":1874,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:20:16.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Performing setup for networking test in namespace pod-network-test-8064 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 4 13:20:16.876: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Feb 4 13:20:17.124: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 4 13:20:19.355: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 4 13:20:21.132: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 4 13:20:23.128: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 4 13:20:25.130: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 4 13:20:27.128: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 4 13:20:29.131: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 4 13:20:31.149: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 4 13:20:33.129: INFO: The status of Pod netserver-0 is Running (Ready = true) Feb 4 13:20:33.134: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Feb 4 13:20:37.158: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Feb 4 13:20:37.158: INFO: Breadth first check of 10.244.2.164 on host 172.18.0.14... 
Feb 4 13:20:37.160: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.16:9080/dial?request=hostname&protocol=udp&host=10.244.2.164&port=8081&tries=1'] Namespace:pod-network-test-8064 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 4 13:20:37.160: INFO: >>> kubeConfig: /root/.kube/config I0204 13:20:37.194864 7 log.go:181] (0xc005392210) (0xc0031aa960) Create stream I0204 13:20:37.194897 7 log.go:181] (0xc005392210) (0xc0031aa960) Stream added, broadcasting: 1 I0204 13:20:37.196682 7 log.go:181] (0xc005392210) Reply frame received for 1 I0204 13:20:37.196726 7 log.go:181] (0xc005392210) (0xc002ee17c0) Create stream I0204 13:20:37.196741 7 log.go:181] (0xc005392210) (0xc002ee17c0) Stream added, broadcasting: 3 I0204 13:20:37.197684 7 log.go:181] (0xc005392210) Reply frame received for 3 I0204 13:20:37.197731 7 log.go:181] (0xc005392210) (0xc0012510e0) Create stream I0204 13:20:37.197746 7 log.go:181] (0xc005392210) (0xc0012510e0) Stream added, broadcasting: 5 I0204 13:20:37.198517 7 log.go:181] (0xc005392210) Reply frame received for 5 I0204 13:20:37.290562 7 log.go:181] (0xc005392210) Data frame received for 3 I0204 13:20:37.290589 7 log.go:181] (0xc002ee17c0) (3) Data frame handling I0204 13:20:37.290614 7 log.go:181] (0xc002ee17c0) (3) Data frame sent I0204 13:20:37.291303 7 log.go:181] (0xc005392210) Data frame received for 5 I0204 13:20:37.291353 7 log.go:181] (0xc0012510e0) (5) Data frame handling I0204 13:20:37.291401 7 log.go:181] (0xc005392210) Data frame received for 3 I0204 13:20:37.291421 7 log.go:181] (0xc002ee17c0) (3) Data frame handling I0204 13:20:37.293036 7 log.go:181] (0xc005392210) Data frame received for 1 I0204 13:20:37.293068 7 log.go:181] (0xc0031aa960) (1) Data frame handling I0204 13:20:37.293092 7 log.go:181] (0xc0031aa960) (1) Data frame sent I0204 13:20:37.293117 7 log.go:181] (0xc005392210) (0xc0031aa960) Stream removed, broadcasting: 1 I0204 13:20:37.293142 7 log.go:181] (0xc005392210) Go away received I0204 13:20:37.293239 7 log.go:181] (0xc005392210) (0xc0031aa960) Stream removed, broadcasting: 1 I0204 13:20:37.293273 7 log.go:181] (0xc005392210) (0xc002ee17c0) Stream removed, broadcasting: 3 I0204 13:20:37.293294 7 log.go:181] (0xc005392210) (0xc0012510e0) Stream removed, broadcasting: 5 Feb 4 13:20:37.293: INFO: Waiting for responses: map[] Feb 4 13:20:37.293: INFO: reached 10.244.2.164 after 0/1 tries Feb 4 13:20:37.293: INFO: Breadth first check of 10.244.1.14 on host 172.18.0.16... 
Feb 4 13:20:37.296: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.16:9080/dial?request=hostname&protocol=udp&host=10.244.1.14&port=8081&tries=1'] Namespace:pod-network-test-8064 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 4 13:20:37.296: INFO: >>> kubeConfig: /root/.kube/config I0204 13:20:37.326867 7 log.go:181] (0xc005392840) (0xc0031aac80) Create stream I0204 13:20:37.326897 7 log.go:181] (0xc005392840) (0xc0031aac80) Stream added, broadcasting: 1 I0204 13:20:37.328548 7 log.go:181] (0xc005392840) Reply frame received for 1 I0204 13:20:37.328594 7 log.go:181] (0xc005392840) (0xc001420c80) Create stream I0204 13:20:37.328614 7 log.go:181] (0xc005392840) (0xc001420c80) Stream added, broadcasting: 3 I0204 13:20:37.329706 7 log.go:181] (0xc005392840) Reply frame received for 3 I0204 13:20:37.329745 7 log.go:181] (0xc005392840) (0xc002ee1860) Create stream I0204 13:20:37.329765 7 log.go:181] (0xc005392840) (0xc002ee1860) Stream added, broadcasting: 5 I0204 13:20:37.330945 7 log.go:181] (0xc005392840) Reply frame received for 5 I0204 13:20:37.420167 7 log.go:181] (0xc005392840) Data frame received for 3 I0204 13:20:37.420207 7 log.go:181] (0xc001420c80) (3) Data frame handling I0204 13:20:37.420237 7 log.go:181] (0xc001420c80) (3) Data frame sent I0204 13:20:37.420442 7 log.go:181] (0xc005392840) Data frame received for 5 I0204 13:20:37.420478 7 log.go:181] (0xc002ee1860) (5) Data frame handling I0204 13:20:37.420551 7 log.go:181] (0xc005392840) Data frame received for 3 I0204 13:20:37.420573 7 log.go:181] (0xc001420c80) (3) Data frame handling I0204 13:20:37.422084 7 log.go:181] (0xc005392840) Data frame received for 1 I0204 13:20:37.422106 7 log.go:181] (0xc0031aac80) (1) Data frame handling I0204 13:20:37.422124 7 log.go:181] (0xc0031aac80) (1) Data frame sent I0204 13:20:37.422136 7 log.go:181] (0xc005392840) (0xc0031aac80) Stream removed, broadcasting: 1 I0204 13:20:37.422168 7 log.go:181] (0xc005392840) Go away received I0204 13:20:37.422223 7 log.go:181] (0xc005392840) (0xc0031aac80) Stream removed, broadcasting: 1 I0204 13:20:37.422239 7 log.go:181] (0xc005392840) (0xc001420c80) Stream removed, broadcasting: 3 I0204 13:20:37.422250 7 log.go:181] (0xc005392840) (0xc002ee1860) Stream removed, broadcasting: 5 Feb 4 13:20:37.422: INFO: Waiting for responses: map[] Feb 4 13:20:37.422: INFO: reached 10.244.1.14 after 0/1 tries Feb 4 13:20:37.422: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:20:37.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8064" for this suite. 
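The "Breadth first check" lines boil down to an HTTP GET against the test container pod's /dial endpoint, which relays a UDP hostname request to the target netserver pod and reports who answered. A plain-Go sketch of that probe follows; the IPs and ports are the ones from this run, and the exact JSON reply shape is an assumption about the agnhost helper:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Pod IPs below are specific to this run (see the log); substitute your own.
	// The test-container pod listens on 9080 and forwards a UDP "hostname"
	// request to the netserver pod on 8081.
	url := "http://10.244.1.16:9080/dial?request=hostname&protocol=udp&host=10.244.2.164&port=8081&tries=1"

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// On success the body lists the hostnames that answered,
	// e.g. {"responses":["netserver-0"]}.
	fmt.Println(string(body))
}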
• [SLOW TEST:21.275 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":311,"completed":114,"skipped":1895,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:20:37.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 4 13:20:42.149: INFO: Successfully updated pod "pod-update-activedeadlineseconds-6eff2b49-62b2-42dc-86e0-742235f76781" Feb 4 13:20:42.149: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-6eff2b49-62b2-42dc-86e0-742235f76781" in namespace "pods-4386" to be "terminated due to deadline exceeded" Feb 4 13:20:42.191: INFO: Pod "pod-update-activedeadlineseconds-6eff2b49-62b2-42dc-86e0-742235f76781": Phase="Running", Reason="", readiness=true. Elapsed: 42.205623ms Feb 4 13:20:44.300: INFO: Pod "pod-update-activedeadlineseconds-6eff2b49-62b2-42dc-86e0-742235f76781": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.15063582s Feb 4 13:20:44.300: INFO: Pod "pod-update-activedeadlineseconds-6eff2b49-62b2-42dc-86e0-742235f76781" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:20:44.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4386" for this suite. 
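The step "updating the pod" consists of shrinking spec.activeDeadlineSeconds on a running pod, after which the kubelet kills it and the phase flips to Failed with reason DeadlineExceeded. A minimal client-go sketch, assuming a pod named pod-update-activedeadlineseconds in the default namespace (the real test uses generated names):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	pods := cs.CoreV1().Pods("default")
	pod, err := pods.Get(context.TODO(), "pod-update-activedeadlineseconds", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Shrink the deadline; the kubelet then kills the pod and marks it
	// Failed with reason DeadlineExceeded, as seen in the log above.
	deadline := int64(5)
	pod.Spec.ActiveDeadlineSeconds = &deadline
	if _, err := pods.Update(context.TODO(), pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("activeDeadlineSeconds updated to", deadline)
}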
• [SLOW TEST:6.954 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":311,"completed":115,"skipped":1910,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:20:44.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:740 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3149 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-3149 I0204 13:20:46.688940 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-3149, replica count: 2 I0204 13:20:49.739398 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 13:20:52.739611 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 4 13:20:52.739: INFO: Creating new exec pod Feb 4 13:20:57.773: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-3149 exec execpodcx5h9 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Feb 4 13:20:57.978: INFO: stderr: "I0204 13:20:57.909246 2489 log.go:181] (0xc00003abb0) (0xc00074e280) Create stream\nI0204 13:20:57.909319 2489 log.go:181] (0xc00003abb0) (0xc00074e280) Stream added, broadcasting: 1\nI0204 13:20:57.911423 2489 log.go:181] (0xc00003abb0) Reply frame received for 1\nI0204 13:20:57.911487 2489 log.go:181] (0xc00003abb0) (0xc000322780) Create stream\nI0204 13:20:57.911518 2489 log.go:181] (0xc00003abb0) (0xc000322780) Stream added, broadcasting: 3\nI0204 13:20:57.912666 2489 log.go:181] (0xc00003abb0) Reply frame received for 3\nI0204 13:20:57.912743 2489 log.go:181] (0xc00003abb0) (0xc0003c7220) Create stream\nI0204 13:20:57.912772 2489 log.go:181] (0xc00003abb0) (0xc0003c7220) Stream added, broadcasting: 5\nI0204 13:20:57.913870 2489 log.go:181] (0xc00003abb0) Reply frame received for 5\nI0204 13:20:57.971443 2489 log.go:181] (0xc00003abb0) Data frame received for 3\nI0204 13:20:57.971473 2489 log.go:181] (0xc000322780) (3) Data frame handling\nI0204 
13:20:57.971548 2489 log.go:181] (0xc00003abb0) Data frame received for 5\nI0204 13:20:57.971591 2489 log.go:181] (0xc0003c7220) (5) Data frame handling\nI0204 13:20:57.971631 2489 log.go:181] (0xc0003c7220) (5) Data frame sent\nI0204 13:20:57.971654 2489 log.go:181] (0xc00003abb0) Data frame received for 5\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0204 13:20:57.971681 2489 log.go:181] (0xc0003c7220) (5) Data frame handling\nI0204 13:20:57.972623 2489 log.go:181] (0xc00003abb0) Data frame received for 1\nI0204 13:20:57.972643 2489 log.go:181] (0xc00074e280) (1) Data frame handling\nI0204 13:20:57.972658 2489 log.go:181] (0xc00074e280) (1) Data frame sent\nI0204 13:20:57.972903 2489 log.go:181] (0xc00003abb0) (0xc00074e280) Stream removed, broadcasting: 1\nI0204 13:20:57.973047 2489 log.go:181] (0xc00003abb0) Go away received\nI0204 13:20:57.973236 2489 log.go:181] (0xc00003abb0) (0xc00074e280) Stream removed, broadcasting: 1\nI0204 13:20:57.973250 2489 log.go:181] (0xc00003abb0) (0xc000322780) Stream removed, broadcasting: 3\nI0204 13:20:57.973255 2489 log.go:181] (0xc00003abb0) (0xc0003c7220) Stream removed, broadcasting: 5\n" Feb 4 13:20:57.978: INFO: stdout: "" Feb 4 13:20:57.978: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-3149 exec execpodcx5h9 -- /bin/sh -x -c nc -zv -t -w 2 10.96.30.215 80' Feb 4 13:20:58.213: INFO: stderr: "I0204 13:20:58.116805 2508 log.go:181] (0xc000141080) (0xc00079be00) Create stream\nI0204 13:20:58.116967 2508 log.go:181] (0xc000141080) (0xc00079be00) Stream added, broadcasting: 1\nI0204 13:20:58.119148 2508 log.go:181] (0xc000141080) Reply frame received for 1\nI0204 13:20:58.119192 2508 log.go:181] (0xc000141080) (0xc000786b40) Create stream\nI0204 13:20:58.119207 2508 log.go:181] (0xc000141080) (0xc000786b40) Stream added, broadcasting: 3\nI0204 13:20:58.120220 2508 log.go:181] (0xc000141080) Reply frame received for 3\nI0204 13:20:58.120265 2508 log.go:181] (0xc000141080) (0xc000441220) Create stream\nI0204 13:20:58.120279 2508 log.go:181] (0xc000141080) (0xc000441220) Stream added, broadcasting: 5\nI0204 13:20:58.121417 2508 log.go:181] (0xc000141080) Reply frame received for 5\nI0204 13:20:58.204796 2508 log.go:181] (0xc000141080) Data frame received for 3\nI0204 13:20:58.204993 2508 log.go:181] (0xc000786b40) (3) Data frame handling\nI0204 13:20:58.205111 2508 log.go:181] (0xc000141080) Data frame received for 5\nI0204 13:20:58.205148 2508 log.go:181] (0xc000441220) (5) Data frame handling\nI0204 13:20:58.205169 2508 log.go:181] (0xc000441220) (5) Data frame sent\nI0204 13:20:58.205185 2508 log.go:181] (0xc000141080) Data frame received for 5\nI0204 13:20:58.205203 2508 log.go:181] (0xc000441220) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.30.215 80\nConnection to 10.96.30.215 80 port [tcp/http] succeeded!\nI0204 13:20:58.206532 2508 log.go:181] (0xc000141080) Data frame received for 1\nI0204 13:20:58.206571 2508 log.go:181] (0xc00079be00) (1) Data frame handling\nI0204 13:20:58.206603 2508 log.go:181] (0xc00079be00) (1) Data frame sent\nI0204 13:20:58.206631 2508 log.go:181] (0xc000141080) (0xc00079be00) Stream removed, broadcasting: 1\nI0204 13:20:58.206687 2508 log.go:181] (0xc000141080) Go away received\nI0204 13:20:58.207090 2508 log.go:181] (0xc000141080) (0xc00079be00) Stream removed, broadcasting: 1\nI0204 13:20:58.207117 2508 log.go:181] (0xc000141080) (0xc000786b40) Stream removed, 
broadcasting: 3\nI0204 13:20:58.207134 2508 log.go:181] (0xc000141080) (0xc000441220) Stream removed, broadcasting: 5\n" Feb 4 13:20:58.213: INFO: stdout: "" Feb 4 13:20:58.213: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-3149 exec execpodcx5h9 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 31348' Feb 4 13:20:58.428: INFO: stderr: "I0204 13:20:58.353099 2526 log.go:181] (0xc00077a000) (0xc0004826e0) Create stream\nI0204 13:20:58.353186 2526 log.go:181] (0xc00077a000) (0xc0004826e0) Stream added, broadcasting: 1\nI0204 13:20:58.355138 2526 log.go:181] (0xc00077a000) Reply frame received for 1\nI0204 13:20:58.355189 2526 log.go:181] (0xc00077a000) (0xc0006440a0) Create stream\nI0204 13:20:58.355211 2526 log.go:181] (0xc00077a000) (0xc0006440a0) Stream added, broadcasting: 3\nI0204 13:20:58.356079 2526 log.go:181] (0xc00077a000) Reply frame received for 3\nI0204 13:20:58.356123 2526 log.go:181] (0xc00077a000) (0xc0003b5360) Create stream\nI0204 13:20:58.356134 2526 log.go:181] (0xc00077a000) (0xc0003b5360) Stream added, broadcasting: 5\nI0204 13:20:58.357195 2526 log.go:181] (0xc00077a000) Reply frame received for 5\nI0204 13:20:58.419297 2526 log.go:181] (0xc00077a000) Data frame received for 5\nI0204 13:20:58.419336 2526 log.go:181] (0xc0003b5360) (5) Data frame handling\nI0204 13:20:58.419361 2526 log.go:181] (0xc0003b5360) (5) Data frame sent\nI0204 13:20:58.419380 2526 log.go:181] (0xc00077a000) Data frame received for 5\nI0204 13:20:58.419402 2526 log.go:181] (0xc0003b5360) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 31348\nConnection to 172.18.0.14 31348 port [tcp/*] succeeded!\nI0204 13:20:58.419485 2526 log.go:181] (0xc00077a000) Data frame received for 3\nI0204 13:20:58.419516 2526 log.go:181] (0xc0006440a0) (3) Data frame handling\nI0204 13:20:58.420704 2526 log.go:181] (0xc00077a000) Data frame received for 1\nI0204 13:20:58.420750 2526 log.go:181] (0xc0004826e0) (1) Data frame handling\nI0204 13:20:58.420776 2526 log.go:181] (0xc0004826e0) (1) Data frame sent\nI0204 13:20:58.420798 2526 log.go:181] (0xc00077a000) (0xc0004826e0) Stream removed, broadcasting: 1\nI0204 13:20:58.420963 2526 log.go:181] (0xc00077a000) Go away received\nI0204 13:20:58.421540 2526 log.go:181] (0xc00077a000) (0xc0004826e0) Stream removed, broadcasting: 1\nI0204 13:20:58.421565 2526 log.go:181] (0xc00077a000) (0xc0006440a0) Stream removed, broadcasting: 3\nI0204 13:20:58.421576 2526 log.go:181] (0xc00077a000) (0xc0003b5360) Stream removed, broadcasting: 5\n" Feb 4 13:20:58.428: INFO: stdout: "" Feb 4 13:20:58.428: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-3149 exec execpodcx5h9 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.16 31348' Feb 4 13:20:58.640: INFO: stderr: "I0204 13:20:58.565031 2544 log.go:181] (0xc0001f0370) (0xc0006ac0a0) Create stream\nI0204 13:20:58.565099 2544 log.go:181] (0xc0001f0370) (0xc0006ac0a0) Stream added, broadcasting: 1\nI0204 13:20:58.566989 2544 log.go:181] (0xc0001f0370) Reply frame received for 1\nI0204 13:20:58.567029 2544 log.go:181] (0xc0001f0370) (0xc0000ca3c0) Create stream\nI0204 13:20:58.567038 2544 log.go:181] (0xc0001f0370) (0xc0000ca3c0) Stream added, broadcasting: 3\nI0204 13:20:58.568133 2544 log.go:181] (0xc0001f0370) Reply frame received for 3\nI0204 13:20:58.568174 2544 log.go:181] (0xc0001f0370) (0xc0003b8dc0) Create stream\nI0204 13:20:58.568187 2544 log.go:181] (0xc0001f0370) (0xc0003b8dc0) 
Stream added, broadcasting: 5\nI0204 13:20:58.569559 2544 log.go:181] (0xc0001f0370) Reply frame received for 5\nI0204 13:20:58.632624 2544 log.go:181] (0xc0001f0370) Data frame received for 5\nI0204 13:20:58.632664 2544 log.go:181] (0xc0003b8dc0) (5) Data frame handling\nI0204 13:20:58.632676 2544 log.go:181] (0xc0003b8dc0) (5) Data frame sent\nI0204 13:20:58.632683 2544 log.go:181] (0xc0001f0370) Data frame received for 5\nI0204 13:20:58.632690 2544 log.go:181] (0xc0003b8dc0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.16 31348\nConnection to 172.18.0.16 31348 port [tcp/*] succeeded!\nI0204 13:20:58.632740 2544 log.go:181] (0xc0001f0370) Data frame received for 3\nI0204 13:20:58.632749 2544 log.go:181] (0xc0000ca3c0) (3) Data frame handling\nI0204 13:20:58.634591 2544 log.go:181] (0xc0001f0370) Data frame received for 1\nI0204 13:20:58.634624 2544 log.go:181] (0xc0006ac0a0) (1) Data frame handling\nI0204 13:20:58.634639 2544 log.go:181] (0xc0006ac0a0) (1) Data frame sent\nI0204 13:20:58.634653 2544 log.go:181] (0xc0001f0370) (0xc0006ac0a0) Stream removed, broadcasting: 1\nI0204 13:20:58.634674 2544 log.go:181] (0xc0001f0370) Go away received\nI0204 13:20:58.635023 2544 log.go:181] (0xc0001f0370) (0xc0006ac0a0) Stream removed, broadcasting: 1\nI0204 13:20:58.635049 2544 log.go:181] (0xc0001f0370) (0xc0000ca3c0) Stream removed, broadcasting: 3\nI0204 13:20:58.635056 2544 log.go:181] (0xc0001f0370) (0xc0003b8dc0) Stream removed, broadcasting: 5\n" Feb 4 13:20:58.640: INFO: stdout: "" Feb 4 13:20:58.640: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:20:58.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3149" for this suite. 
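The conversion under test is a single spec change on the Service object; the apiserver then allocates the node port that the nc probes above connect to on each node. A hedged sketch (service and namespace names assumed; the real test also keeps a replication controller behind the service so the port has endpoints):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	svcs := cs.CoreV1().Services("default")
	svc, err := svcs.Get(context.TODO(), "externalname-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Switch the type: externalName must be cleared and at least one port
	// defined; the apiserver assigns a ClusterIP and a NodePort.
	svc.Spec.Type = corev1.ServiceTypeNodePort
	svc.Spec.ExternalName = ""
	svc.Spec.Ports = []corev1.ServicePort{{Port: 80}}

	updated, err := svcs.Update(context.TODO(), svc, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}
	// 31348 in the log above is one such allocated port.
	fmt.Println("allocated node port:", updated.Spec.Ports[0].NodePort)
}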
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:744 • [SLOW TEST:14.321 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":311,"completed":116,"skipped":1924,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:20:58.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating secret with name secret-test-a3e44b6d-b686-4216-92ca-4abc0e694a2d STEP: Creating a pod to test consume secrets Feb 4 13:20:58.823: INFO: Waiting up to 5m0s for pod "pod-secrets-6a81affb-a34c-45ac-ab9e-d31be8676988" in namespace "secrets-5988" to be "Succeeded or Failed" Feb 4 13:20:58.862: INFO: Pod "pod-secrets-6a81affb-a34c-45ac-ab9e-d31be8676988": Phase="Pending", Reason="", readiness=false. Elapsed: 39.556008ms Feb 4 13:21:00.917: INFO: Pod "pod-secrets-6a81affb-a34c-45ac-ab9e-d31be8676988": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093964167s Feb 4 13:21:02.922: INFO: Pod "pod-secrets-6a81affb-a34c-45ac-ab9e-d31be8676988": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098915528s STEP: Saw pod success Feb 4 13:21:02.922: INFO: Pod "pod-secrets-6a81affb-a34c-45ac-ab9e-d31be8676988" satisfied condition "Succeeded or Failed" Feb 4 13:21:02.925: INFO: Trying to get logs from node latest-worker pod pod-secrets-6a81affb-a34c-45ac-ab9e-d31be8676988 container secret-volume-test: STEP: delete the pod Feb 4 13:21:02.990: INFO: Waiting for pod pod-secrets-6a81affb-a34c-45ac-ab9e-d31be8676988 to disappear Feb 4 13:21:03.013: INFO: Pod pod-secrets-6a81affb-a34c-45ac-ab9e-d31be8676988 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:21:03.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5988" for this suite. 
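The pod this test creates mounts the Secret as a volume and emits a key's content so the framework can read it back from the container log. A sketch with assumed names and a busybox stand-in for the suite's mount-test image:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					// The secret must already exist in the namespace.
					Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}

	created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// The pod runs to completion ("Succeeded") and its log holds the value.
	fmt.Println("created", created.Name)
}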
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":311,"completed":117,"skipped":1926,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:21:03.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating pod liveness-295c3d4e-a204-4cde-b0d5-967d400b60dd in namespace container-probe-4351 Feb 4 13:21:07.142: INFO: Started pod liveness-295c3d4e-a204-4cde-b0d5-967d400b60dd in namespace container-probe-4351 STEP: checking the pod's current state and verifying that restartCount is present Feb 4 13:21:07.145: INFO: Initial restart count of pod liveness-295c3d4e-a204-4cde-b0d5-967d400b60dd is 0 Feb 4 13:21:27.304: INFO: Restart count of pod container-probe-4351/liveness-295c3d4e-a204-4cde-b0d5-967d400b60dd is now 1 (20.15887304s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:21:27.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4351" for this suite. 
• [SLOW TEST:24.375 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":311,"completed":118,"skipped":1961,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:21:27.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Feb 4 13:21:27.924: INFO: Waiting up to 1m0s for all nodes to be ready Feb 4 13:22:27.952: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Create pods that use 2/3 of node resources. Feb 4 13:22:27.993: INFO: Created pod: pod0-sched-preemption-low-priority Feb 4 13:22:28.067: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that uses the same resources as a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:22:56.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-124" for this suite.
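The scenario relies on pod priority: filler pods hold 2/3 of each node at low and medium priority, then a system-cluster-critical pod requests the same resources and the scheduler preempts the low-priority one. A sketch of the two building blocks with illustrative names and values (resource requests, which are what actually makes preemption trigger, are omitted for brevity; critical classes are predefined and, in this era, restricted to kube-system):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// A low-priority class for the filler pods.
	pc := &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "low-priority"},
		Value:      10,
	}
	if _, err := cs.SchedulingV1().PriorityClasses().Create(context.TODO(), pc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// A critical pod; when the node is full, scheduling it preempts
	// (deletes) a lower-priority pod holding equivalent resources.
	critical := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "critical-pod", Namespace: "kube-system"},
		Spec: corev1.PodSpec{
			PriorityClassName: "system-cluster-critical",
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("kube-system").Create(context.TODO(), critical, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("priority class and critical pod created")
}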
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:88.973 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":311,"completed":119,"skipped":1966,"failed":0} [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:22:56.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating the pod Feb 4 13:23:01.564: INFO: Successfully updated pod "annotationupdatea53e60a5-6e4e-41af-bc54-b8d787edb6a8" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:23:05.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7466" for this suite. 
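The annotation update propagates through a projected downward API volume: metadata.annotations is exposed as a file, and the kubelet rewrites that file in place when the annotations change, without restarting the container. A sketch of the wiring (names, image, and annotation assumed):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-example",
			Annotations: map[string]string{"builder": "bar"},
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "annotations",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}

	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Patching metadata.annotations later makes the kubelet rewrite
	// /etc/podinfo/annotations in place, which is what the test asserts.
	fmt.Println("pod created")
}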
• [SLOW TEST:9.252 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":311,"completed":120,"skipped":1966,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:23:05.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:23:40.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-776" for this suite. 
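Each terminate-cmd-* container runs a short command under a different restartPolicy (rpa = Always, rpof = OnFailure, rpn = Never), and the assertions read RestartCount, Phase, Ready, and State from the resulting container status. A sketch of the Never case with assumed names:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	pods := cs.CoreV1().Pods("default")
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "terminate-cmd-rpn-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // rpn; rpa/rpof use Always/OnFailure
			Containers: []corev1.Container{{
				Name:    "terminate-cmd-rpn",
				Image:   "busybox",
				Command: []string{"sh", "-c", "exit 0"},
			}},
		},
	}
	if _, err := pods.Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Poll until the container terminates, then inspect the fields the
	// test asserts on: phase, exit code, restart count.
	for {
		p, err := pods.Get(context.TODO(), pod.Name, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if st := p.Status.ContainerStatuses; len(st) > 0 && st[0].State.Terminated != nil {
			fmt.Println("phase:", p.Status.Phase,
				"exitCode:", st[0].State.Terminated.ExitCode,
				"restarts:", st[0].RestartCount)
			return
		}
		time.Sleep(2 * time.Second)
	}
}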
• [SLOW TEST:34.907 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":311,"completed":121,"skipped":1978,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:23:40.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name projected-configmap-test-volume-map-9151b7b4-2159-48e4-8a7a-bbe77a792692 STEP: Creating a pod to test consume configMaps Feb 4 13:23:40.656: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1e953366-91d5-486e-9569-d4ede3eab242" in namespace "projected-6741" to be "Succeeded or Failed" Feb 4 13:23:40.687: INFO: Pod "pod-projected-configmaps-1e953366-91d5-486e-9569-d4ede3eab242": Phase="Pending", Reason="", readiness=false. Elapsed: 31.197173ms Feb 4 13:23:42.691: INFO: Pod "pod-projected-configmaps-1e953366-91d5-486e-9569-d4ede3eab242": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034699377s Feb 4 13:23:44.695: INFO: Pod "pod-projected-configmaps-1e953366-91d5-486e-9569-d4ede3eab242": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038743455s STEP: Saw pod success Feb 4 13:23:44.695: INFO: Pod "pod-projected-configmaps-1e953366-91d5-486e-9569-d4ede3eab242" satisfied condition "Succeeded or Failed" Feb 4 13:23:44.698: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-1e953366-91d5-486e-9569-d4ede3eab242 container agnhost-container: STEP: delete the pod Feb 4 13:23:44.751: INFO: Waiting for pod pod-projected-configmaps-1e953366-91d5-486e-9569-d4ede3eab242 to disappear Feb 4 13:23:44.762: INFO: Pod pod-projected-configmaps-1e953366-91d5-486e-9569-d4ede3eab242 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:23:44.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6741" for this suite. 
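"With mappings" means the volume does not mirror the ConfigMap keys one-to-one: an items list maps each key to a chosen relative path inside the mount. A sketch of that projection with assumed names:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
								// The mapping: key "data-1" appears as file "path/to/data-2".
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "agnhost-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/projected/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected",
				}},
			}},
		},
	}

	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("pod created")
}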
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":311,"completed":122,"skipped":1999,"failed":0} SSSS ------------------------------ [sig-api-machinery] Events should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:23:44.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Create set of events Feb 4 13:23:44.841: INFO: created test-event-1 Feb 4 13:23:45.097: INFO: created test-event-2 Feb 4 13:23:45.101: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events Feb 4 13:23:45.217: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Feb 4 13:23:45.261: INFO: requesting list of events to confirm quantity [AfterEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:23:45.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5754" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":311,"completed":123,"skipped":2003,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:23:45.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 4 13:23:50.504: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:23:50.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2013" for this suite. 
• [SLOW TEST:5.288 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":311,"completed":124,"skipped":2021,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:23:50.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0204 13:23:51.862804 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Feb 4 13:24:53.881: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:24:53.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6127" for this suite. 
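The orphaning behavior hangs on deleteOptions.propagationPolicy: with Orphan, the garbage collector strips ownerReferences instead of cascading, so the Deployment's ReplicaSet survives, which is exactly what the test waits to confirm. A minimal sketch with an assumed deployment name:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Orphan instead of cascading: the owned ReplicaSet is kept, and the
	// garbage collector removes its ownerReference to the Deployment.
	policy := metav1.DeletePropagationOrphan
	err = cs.AppsV1().Deployments("default").Delete(context.TODO(), "test-deployment",
		metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}

	// The ReplicaSet should still be listed afterwards.
	rsList, err := cs.AppsV1().ReplicaSets("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("replicasets remaining:", len(rsList.Items))
}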
• [SLOW TEST:63.329 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":311,"completed":125,"skipped":2022,"failed":0} SSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:24:53.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Feb 4 13:25:00.597: INFO: Successfully updated pod "adopt-release-dk5x4" STEP: Checking that the Job readopts the Pod Feb 4 13:25:00.597: INFO: Waiting up to 15m0s for pod "adopt-release-dk5x4" in namespace "job-8075" to be "adopted" Feb 4 13:25:00.660: INFO: Pod "adopt-release-dk5x4": Phase="Running", Reason="", readiness=true. Elapsed: 62.399095ms Feb 4 13:25:02.664: INFO: Pod "adopt-release-dk5x4": Phase="Running", Reason="", readiness=true. Elapsed: 2.066071422s Feb 4 13:25:02.664: INFO: Pod "adopt-release-dk5x4" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Feb 4 13:25:03.175: INFO: Successfully updated pod "adopt-release-dk5x4" STEP: Checking that the Job releases the Pod Feb 4 13:25:03.175: INFO: Waiting up to 15m0s for pod "adopt-release-dk5x4" in namespace "job-8075" to be "released" Feb 4 13:25:03.290: INFO: Pod "adopt-release-dk5x4": Phase="Running", Reason="", readiness=true. Elapsed: 114.3104ms Feb 4 13:25:03.290: INFO: Pod "adopt-release-dk5x4" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:25:03.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8075" for this suite. 
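Adoption and release are both label-driven: the Job controller adopts an orphaned pod whose labels match its selector by writing itself into the pod's ownerReferences, and releases the pod again when those labels are removed. A sketch of the release half with assumed names (the controller also matches on the generated controller-uid label):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	pods := cs.CoreV1().Pods("default")
	pod, err := pods.Get(context.TODO(), "adopt-release-example", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	if ref := metav1.GetControllerOf(pod); ref != nil {
		fmt.Printf("currently controlled by %s %s\n", ref.Kind, ref.Name)
	}

	// Dropping the labels the Job's selector matches on causes the
	// controller to release the pod (it removes its ownerReference).
	delete(pod.Labels, "job-name")
	delete(pod.Labels, "controller-uid")
	if _, err := pods.Update(context.TODO(), pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("labels removed; controller should release the pod")
}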
• [SLOW TEST:9.482 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":311,"completed":126,"skipped":2028,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:25:03.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 13:25:03.482: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:25:07.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8136" for this suite. 
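The test pulls the logs over a websocket connection to the apiserver's log subresource; the everyday client-go equivalent streams the same endpoint over HTTP. A sketch with an assumed pod name:

package main

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// GetLogs builds a request against the pod's /log subresource; Stream
	// opens it and returns the log bytes as they are produced.
	req := cs.CoreV1().Pods("default").GetLogs("pod-logs-example", &corev1.PodLogOptions{Follow: true})
	stream, err := req.Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer stream.Close()

	if _, err := io.Copy(os.Stdout, stream); err != nil {
		panic(err)
	}
}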
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":311,"completed":127,"skipped":2055,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:25:07.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1392 STEP: creating an pod Feb 4 13:25:07.726: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-7273 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.26 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' Feb 4 13:25:11.293: INFO: stderr: "" Feb 4 13:25:11.293: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Waiting for log generator to start. Feb 4 13:25:11.293: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Feb 4 13:25:11.293: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-7273" to be "running and ready, or succeeded" Feb 4 13:25:11.296: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.998265ms Feb 4 13:25:13.391: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097930204s Feb 4 13:25:15.395: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.101874291s Feb 4 13:25:15.395: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Feb 4 13:25:15.395: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings Feb 4 13:25:15.395: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-7273 logs logs-generator logs-generator' Feb 4 13:25:15.521: INFO: stderr: "" Feb 4 13:25:15.521: INFO: stdout: "I0204 13:25:14.367912 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/ff6h 348\nI0204 13:25:14.568044 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/s6xm 501\nI0204 13:25:14.768043 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/wlh6 295\nI0204 13:25:14.968028 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/vsm9 268\nI0204 13:25:15.168116 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/556 424\nI0204 13:25:15.368069 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/rrh 415\n" STEP: limiting log lines Feb 4 13:25:15.522: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-7273 logs logs-generator logs-generator --tail=1' Feb 4 13:25:15.663: INFO: stderr: "" Feb 4 13:25:15.663: INFO: stdout: "I0204 13:25:15.568049 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/dh2d 246\n" Feb 4 13:25:15.663: INFO: got output "I0204 13:25:15.568049 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/dh2d 246\n" STEP: limiting log bytes Feb 4 13:25:15.663: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-7273 logs logs-generator logs-generator --limit-bytes=1' Feb 4 13:25:15.781: INFO: stderr: "" Feb 4 13:25:15.781: INFO: stdout: "I" Feb 4 13:25:15.781: INFO: got output "I" STEP: exposing timestamps Feb 4 13:25:15.781: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-7273 logs logs-generator logs-generator --tail=1 --timestamps' Feb 4 13:25:15.897: INFO: stderr: "" Feb 4 13:25:15.897: INFO: stdout: "2021-02-04T13:25:15.768254762Z I0204 13:25:15.768084 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/57b 380\n" Feb 4 13:25:15.897: INFO: got output "2021-02-04T13:25:15.768254762Z I0204 13:25:15.768084 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/57b 380\n" STEP: restricting to a time range Feb 4 13:25:18.397: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-7273 logs logs-generator logs-generator --since=1s' Feb 4 13:25:18.506: INFO: stderr: "" Feb 4 13:25:18.506: INFO: stdout: "I0204 13:25:17.568074 1 logs_generator.go:76] 16 GET /api/v1/namespaces/kube-system/pods/rrtj 431\nI0204 13:25:17.768152 1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/nw5w 512\nI0204 13:25:17.968068 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/rtn 352\nI0204 13:25:18.168095 1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/mkg 518\nI0204 13:25:18.368103 1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/ml9j 465\n" Feb 4 13:25:18.506: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-7273 logs logs-generator logs-generator --since=24h' Feb 4 13:25:18.612: INFO: stderr: "" Feb 4 13:25:18.612: INFO: stdout: "I0204 13:25:14.367912 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/ff6h 348\nI0204 13:25:14.568044 1 logs_generator.go:76] 1
POST /api/v1/namespaces/ns/pods/s6xm 501\nI0204 13:25:14.768043 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/wlh6 295\nI0204 13:25:14.968028 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/vsm9 268\nI0204 13:25:15.168116 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/556 424\nI0204 13:25:15.368069 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/rrh 415\nI0204 13:25:15.568049 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/dh2d 246\nI0204 13:25:15.768084 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/57b 380\nI0204 13:25:15.968059 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/bjfx 366\nI0204 13:25:16.168124 1 logs_generator.go:76] 9 POST /api/v1/namespaces/default/pods/xkf 339\nI0204 13:25:16.368061 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/bwlz 300\nI0204 13:25:16.568052 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/w2rv 575\nI0204 13:25:16.768075 1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/wlqc 576\nI0204 13:25:16.968056 1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/wql 492\nI0204 13:25:17.168077 1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/tjjt 531\nI0204 13:25:17.368131 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/h5t 482\nI0204 13:25:17.568074 1 logs_generator.go:76] 16 GET /api/v1/namespaces/kube-system/pods/rrtj 431\nI0204 13:25:17.768152 1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/nw5w 512\nI0204 13:25:17.968068 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/rtn 352\nI0204 13:25:18.168095 1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/mkg 518\nI0204 13:25:18.368103 1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/ml9j 465\nI0204 13:25:18.568075 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/tcd7 244\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1397 Feb 4 13:25:18.612: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-7273 delete pod logs-generator' Feb 4 13:25:51.249: INFO: stderr: "" Feb 4 13:25:51.249: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:25:51.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7273" for this suite. 
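The four filtering modes exercised above map one-to-one onto kubectl logs flags, all of which appear verbatim in the commands the test ran; against any running pod they can be replayed directly (pod and container names as in the test):

kubectl logs logs-generator logs-generator                        # full log
kubectl logs logs-generator logs-generator --tail=1               # last line only
kubectl logs logs-generator logs-generator --limit-bytes=1        # first byte only
kubectl logs logs-generator logs-generator --tail=1 --timestamps  # RFC3339 timestamp prefix
kubectl logs logs-generator logs-generator --since=1s             # entries from the last second
kubectl logs logs-generator logs-generator --since=24h            # effectively everything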
• [SLOW TEST:43.688 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":311,"completed":128,"skipped":2085,"failed":0} [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:25:51.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:25:55.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-327" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":311,"completed":129,"skipped":2085,"failed":0} SSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:25:55.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap that has name configmap-test-emptyKey-676f1801-a3fe-4239-a086-d800c8e6fa73 [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:25:55.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3457" for this suite. 
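The empty-key failure above comes from API-server validation, not from anything on the node; a quick reproduction with a one-off manifest (the ConfigMap name is illustrative):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: empty-key-demo
data:
  "": "value"     # an empty key fails validation
EOF
# Expected result: the API server rejects the object with an Invalid error.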
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":311,"completed":130,"skipped":2091,"failed":0} SSSSSS ------------------------------ [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:25:55.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a ReplicationController STEP: waiting for RC to be added STEP: waiting for available Replicas STEP: patching ReplicationController STEP: waiting for RC to be modified STEP: patching ReplicationController status STEP: waiting for RC to be modified STEP: waiting for available Replicas STEP: fetching ReplicationController status STEP: patching ReplicationController scale STEP: waiting for RC to be modified STEP: waiting for ReplicationController's scale to be the max amount STEP: fetching ReplicationController; ensuring that it's patched STEP: updating ReplicationController status STEP: waiting for RC to be modified STEP: listing all ReplicationControllers STEP: checking that ReplicationController has expected values STEP: deleting ReplicationControllers by collection STEP: waiting for ReplicationController to have a DELETED watchEvent [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:26:04.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-880" for this suite. 
• [SLOW TEST:9.121 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":311,"completed":131,"skipped":2097,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:26:04.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:26:04.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3723" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":311,"completed":132,"skipped":2115,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] server version should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:26:04.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Request ServerVersion STEP: Confirm major version Feb 4 13:26:05.103: INFO: Major version: 1 STEP: Confirm minor version Feb 4 13:26:05.103: INFO: cleanMinorVersion: 21 Feb 4 13:26:05.103: INFO: Minor version: 21+ [AfterEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:26:05.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-9431" for this suite. 
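Version discovery, as exercised above, is a single GET against the /version endpoint; both forms below return the major/minor fields the test confirms:

kubectl version -o json        # client and server version documents
kubectl get --raw /version     # the raw ServerVersion the test requests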
•{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":311,"completed":133,"skipped":2129,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:26:05.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 4 13:26:05.618: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 4 13:26:07.626: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748041965, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748041965, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748041965, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748041965, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 13:26:09.631: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748041965, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748041965, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748041965, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748041965, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 4 13:26:12.939: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Feb 4 13:26:12.965: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:26:13.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3386" for this suite. STEP: Destroying namespace "webhook-3386-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:8.621 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":311,"completed":134,"skipped":2154,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:26:13.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Feb 4 13:26:14.299: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-7265 35f71312-c8e4-46a9-ac0c-b21a62fc2bc5 2090245 0 2021-02-04 13:26:14 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2021-02-04 13:26:14 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fck6v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fck6v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.26,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fck6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHos
tnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:26:14.530: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Feb 4 13:26:16.884: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Feb 4 13:26:18.544: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Feb 4 13:26:18.544: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-7265 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 4 13:26:18.545: INFO: >>> kubeConfig: /root/.kube/config I0204 13:26:18.577643 7 log.go:181] (0xc002ef4a50) (0xc003914640) Create stream I0204 13:26:18.577724 7 log.go:181] (0xc002ef4a50) (0xc003914640) Stream added, broadcasting: 1 I0204 13:26:18.579789 7 log.go:181] (0xc002ef4a50) Reply frame received for 1 I0204 13:26:18.579837 7 log.go:181] (0xc002ef4a50) (0xc00344dea0) Create stream I0204 13:26:18.579856 7 log.go:181] (0xc002ef4a50) (0xc00344dea0) Stream added, broadcasting: 3 I0204 13:26:18.581005 7 log.go:181] (0xc002ef4a50) Reply frame received for 3 I0204 13:26:18.581055 7 log.go:181] (0xc002ef4a50) (0xc0031ab360) Create stream I0204 13:26:18.581075 7 log.go:181] (0xc002ef4a50) (0xc0031ab360) Stream added, broadcasting: 5 I0204 13:26:18.582039 7 log.go:181] (0xc002ef4a50) Reply frame received for 5 I0204 13:26:18.684667 7 log.go:181] (0xc002ef4a50) Data frame received for 3 I0204 13:26:18.684704 7 log.go:181] (0xc00344dea0) (3) Data frame handling I0204 13:26:18.684726 7 log.go:181] (0xc00344dea0) (3) Data frame sent I0204 13:26:18.686175 7 log.go:181] (0xc002ef4a50) Data frame received for 5 I0204 13:26:18.686195 7 log.go:181] (0xc0031ab360) (5) Data frame handling I0204 13:26:18.686289 7 log.go:181] (0xc002ef4a50) Data frame received for 3 I0204 13:26:18.686299 7 log.go:181] (0xc00344dea0) (3) Data frame handling I0204 13:26:18.687822 7 log.go:181] (0xc002ef4a50) Data frame received for 1 I0204 13:26:18.687837 7 log.go:181] (0xc003914640) (1) Data frame handling I0204 13:26:18.687861 7 log.go:181] (0xc003914640) (1) Data frame sent I0204 13:26:18.687889 7 log.go:181] (0xc002ef4a50) (0xc003914640) Stream removed, broadcasting: 1 I0204 13:26:18.687921 7 log.go:181] (0xc002ef4a50) Go away received I0204 13:26:18.687951 7 log.go:181] (0xc002ef4a50) (0xc003914640) Stream removed, broadcasting: 1 I0204 13:26:18.687972 7 log.go:181] (0xc002ef4a50) (0xc00344dea0) Stream removed, broadcasting: 3 I0204 13:26:18.687982 7 log.go:181] (0xc002ef4a50) (0xc0031ab360) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Feb 4 13:26:18.688: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-7265 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 4 13:26:18.688: INFO: >>> kubeConfig: /root/.kube/config I0204 13:26:18.720703 7 log.go:181] (0xc002804000) (0xc0003306e0) Create stream I0204 13:26:18.720737 7 log.go:181] (0xc002804000) (0xc0003306e0) Stream added, broadcasting: 1 I0204 13:26:18.722670 7 log.go:181] (0xc002804000) Reply frame received for 1 I0204 13:26:18.722708 7 log.go:181] (0xc002804000) (0xc0031aa140) Create stream I0204 13:26:18.722722 7 log.go:181] (0xc002804000) (0xc0031aa140) Stream added, broadcasting: 3 I0204 13:26:18.723737 7 log.go:181] (0xc002804000) Reply frame received for 3 I0204 13:26:18.723805 7 log.go:181] (0xc002804000) (0xc003914140) Create stream I0204 13:26:18.723831 7 log.go:181] (0xc002804000) (0xc003914140) Stream added, broadcasting: 5 I0204 13:26:18.724964 7 log.go:181] (0xc002804000) Reply frame received for 5 I0204 13:26:18.799015 7 log.go:181] (0xc002804000) Data frame received for 3 I0204 13:26:18.799048 7 log.go:181] (0xc0031aa140) (3) Data frame handling I0204 13:26:18.799062 7 log.go:181] (0xc0031aa140) (3) Data frame sent I0204 13:26:18.801925 7 log.go:181] (0xc002804000) Data frame received for 3 I0204 13:26:18.801943 7 log.go:181] (0xc0031aa140) (3) Data frame handling I0204 13:26:18.802194 7 log.go:181] (0xc002804000) Data frame received for 5 I0204 13:26:18.802210 7 log.go:181] (0xc003914140) (5) Data frame handling I0204 13:26:18.803609 7 log.go:181] (0xc002804000) Data frame received for 1 I0204 13:26:18.803621 7 log.go:181] (0xc0003306e0) (1) Data frame handling I0204 13:26:18.803633 7 log.go:181] (0xc0003306e0) (1) Data frame sent I0204 13:26:18.803691 7 log.go:181] (0xc002804000) (0xc0003306e0) Stream removed, broadcasting: 1 I0204 13:26:18.803762 7 log.go:181] (0xc002804000) (0xc0003306e0) Stream removed, broadcasting: 1 I0204 13:26:18.803771 7 log.go:181] (0xc002804000) (0xc0031aa140) Stream removed, broadcasting: 3 I0204 13:26:18.803868 7 log.go:181] (0xc002804000) (0xc003914140) Stream removed, broadcasting: 5 Feb 4 13:26:18.803: INFO: Deleting pod test-dns-nameservers... I0204 13:26:18.803958 7 log.go:181] (0xc002804000) Go away received [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:26:18.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7265" for this suite. 
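Stripped of its defaulted fields, the serialized pod above reduces to a short manifest: dnsPolicy "None" tells the kubelet to ignore cluster DNS entirely, and dnsConfig supplies the resolv.conf content verbatim. The nameserver and search values below are the ones from the dump; everything else mirrors the dumped spec:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: test-dns-nameservers
spec:
  dnsPolicy: "None"
  dnsConfig:
    nameservers: ["1.1.1.1"]
    searches: ["resolv.conf.local"]
  containers:
  - name: agnhost-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.26
    args: ["pause"]
EOF

# The resulting /etc/resolv.conf inside the pod should list exactly these values:
kubectl exec test-dns-nameservers -- cat /etc/resolv.conf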
• [SLOW TEST:5.182 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":311,"completed":135,"skipped":2170,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:26:18.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Feb 4 13:26:19.491: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c230c350-61e6-45b5-a055-578b887d01e7" in namespace "projected-9558" to be "Succeeded or Failed" Feb 4 13:26:19.500: INFO: Pod "downwardapi-volume-c230c350-61e6-45b5-a055-578b887d01e7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.793941ms Feb 4 13:26:21.536: INFO: Pod "downwardapi-volume-c230c350-61e6-45b5-a055-578b887d01e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045078134s Feb 4 13:26:23.541: INFO: Pod "downwardapi-volume-c230c350-61e6-45b5-a055-578b887d01e7": Phase="Running", Reason="", readiness=true. Elapsed: 4.049406404s Feb 4 13:26:25.545: INFO: Pod "downwardapi-volume-c230c350-61e6-45b5-a055-578b887d01e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.054060668s STEP: Saw pod success Feb 4 13:26:25.545: INFO: Pod "downwardapi-volume-c230c350-61e6-45b5-a055-578b887d01e7" satisfied condition "Succeeded or Failed" Feb 4 13:26:25.549: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-c230c350-61e6-45b5-a055-578b887d01e7 container client-container: STEP: delete the pod Feb 4 13:26:25.595: INFO: Waiting for pod downwardapi-volume-c230c350-61e6-45b5-a055-578b887d01e7 to disappear Feb 4 13:26:25.631: INFO: Pod downwardapi-volume-c230c350-61e6-45b5-a055-578b887d01e7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:26:25.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9558" for this suite. 
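The projected downwardAPI volume used above exposes a container's own resource request as a file via resourceFieldRef. A minimal sketch; the pod name, file path, and the 250m request are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m        # report the value in millicores (250)
EOF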
• [SLOW TEST:6.723 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":311,"completed":136,"skipped":2182,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:26:25.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test emptydir 0777 on tmpfs Feb 4 13:26:25.783: INFO: Waiting up to 5m0s for pod "pod-bdd7a0d9-021c-4baf-8dc1-b609b0fd3440" in namespace "emptydir-4570" to be "Succeeded or Failed" Feb 4 13:26:25.787: INFO: Pod "pod-bdd7a0d9-021c-4baf-8dc1-b609b0fd3440": Phase="Pending", Reason="", readiness=false. Elapsed: 3.917809ms Feb 4 13:26:27.793: INFO: Pod "pod-bdd7a0d9-021c-4baf-8dc1-b609b0fd3440": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01061168s Feb 4 13:26:29.799: INFO: Pod "pod-bdd7a0d9-021c-4baf-8dc1-b609b0fd3440": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016255391s STEP: Saw pod success Feb 4 13:26:29.799: INFO: Pod "pod-bdd7a0d9-021c-4baf-8dc1-b609b0fd3440" satisfied condition "Succeeded or Failed" Feb 4 13:26:29.804: INFO: Trying to get logs from node latest-worker pod pod-bdd7a0d9-021c-4baf-8dc1-b609b0fd3440 container test-container: STEP: delete the pod Feb 4 13:26:29.944: INFO: Waiting for pod pod-bdd7a0d9-021c-4baf-8dc1-b609b0fd3440 to disappear Feb 4 13:26:30.062: INFO: Pod pod-bdd7a0d9-021c-4baf-8dc1-b609b0fd3440 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:26:30.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4570" for this suite. 
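The test name above encodes the combination it checks: owner root, mode 0777, and a tmpfs medium. "medium: Memory" is what backs an emptyDir with tmpfs; a sketch that inspects both properties from inside the pod (names illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Print the mount's mode and owner, then confirm the backing filesystem.
    command: ["sh", "-c", "ls -ld /mnt/volume && grep /mnt/volume /proc/mounts"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory
EOF

kubectl logs emptydir-0777-demo     # expect drwxrwxrwx owned by root, on tmpfs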
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":137,"skipped":2199,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:26:30.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test env composition Feb 4 13:26:30.197: INFO: Waiting up to 5m0s for pod "var-expansion-3b5bdc54-04e5-4fe4-85fe-001de3b04d6a" in namespace "var-expansion-5881" to be "Succeeded or Failed" Feb 4 13:26:30.207: INFO: Pod "var-expansion-3b5bdc54-04e5-4fe4-85fe-001de3b04d6a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.301394ms Feb 4 13:26:32.211: INFO: Pod "var-expansion-3b5bdc54-04e5-4fe4-85fe-001de3b04d6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014319686s Feb 4 13:26:34.215: INFO: Pod "var-expansion-3b5bdc54-04e5-4fe4-85fe-001de3b04d6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018764608s STEP: Saw pod success Feb 4 13:26:34.215: INFO: Pod "var-expansion-3b5bdc54-04e5-4fe4-85fe-001de3b04d6a" satisfied condition "Succeeded or Failed" Feb 4 13:26:34.218: INFO: Trying to get logs from node latest-worker pod var-expansion-3b5bdc54-04e5-4fe4-85fe-001de3b04d6a container dapi-container: STEP: delete the pod Feb 4 13:26:34.268: INFO: Waiting for pod var-expansion-3b5bdc54-04e5-4fe4-85fe-001de3b04d6a to disappear Feb 4 13:26:34.302: INFO: Pod var-expansion-3b5bdc54-04e5-4fe4-85fe-001de3b04d6a no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:26:34.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5881" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":311,"completed":138,"skipped":2200,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:26:34.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating projection with secret that has name projected-secret-test-map-951170e7-84d9-4d7b-8a10-f684a5c539a7 STEP: Creating a pod to test consume secrets Feb 4 13:26:34.448: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-08ae2432-29cb-4c4e-9681-7cd615f50244" in namespace "projected-4718" to be "Succeeded or Failed" Feb 4 13:26:34.472: INFO: Pod "pod-projected-secrets-08ae2432-29cb-4c4e-9681-7cd615f50244": Phase="Pending", Reason="", readiness=false. Elapsed: 23.588679ms Feb 4 13:26:36.476: INFO: Pod "pod-projected-secrets-08ae2432-29cb-4c4e-9681-7cd615f50244": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028003074s Feb 4 13:26:38.481: INFO: Pod "pod-projected-secrets-08ae2432-29cb-4c4e-9681-7cd615f50244": Phase="Running", Reason="", readiness=true. Elapsed: 4.03278247s Feb 4 13:26:40.489: INFO: Pod "pod-projected-secrets-08ae2432-29cb-4c4e-9681-7cd615f50244": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040551086s STEP: Saw pod success Feb 4 13:26:40.489: INFO: Pod "pod-projected-secrets-08ae2432-29cb-4c4e-9681-7cd615f50244" satisfied condition "Succeeded or Failed" Feb 4 13:26:40.491: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-08ae2432-29cb-4c4e-9681-7cd615f50244 container projected-secret-volume-test: STEP: delete the pod Feb 4 13:26:40.523: INFO: Waiting for pod pod-projected-secrets-08ae2432-29cb-4c4e-9681-7cd615f50244 to disappear Feb 4 13:26:40.553: INFO: Pod pod-projected-secrets-08ae2432-29cb-4c4e-9681-7cd615f50244 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:26:40.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4718" for this suite. 
• [SLOW TEST:6.252 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":311,"completed":139,"skipped":2233,"failed":0} [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:26:40.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Feb 4 13:26:40.642: INFO: Waiting up to 5m0s for pod "downwardapi-volume-904004e9-cc93-4da1-bea5-e145466138b6" in namespace "projected-9382" to be "Succeeded or Failed" Feb 4 13:26:40.698: INFO: Pod "downwardapi-volume-904004e9-cc93-4da1-bea5-e145466138b6": Phase="Pending", Reason="", readiness=false. Elapsed: 56.218581ms Feb 4 13:26:42.702: INFO: Pod "downwardapi-volume-904004e9-cc93-4da1-bea5-e145466138b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05994336s Feb 4 13:26:44.706: INFO: Pod "downwardapi-volume-904004e9-cc93-4da1-bea5-e145466138b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064330601s STEP: Saw pod success Feb 4 13:26:44.706: INFO: Pod "downwardapi-volume-904004e9-cc93-4da1-bea5-e145466138b6" satisfied condition "Succeeded or Failed" Feb 4 13:26:44.709: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-904004e9-cc93-4da1-bea5-e145466138b6 container client-container: STEP: delete the pod Feb 4 13:26:44.751: INFO: Waiting for pod downwardapi-volume-904004e9-cc93-4da1-bea5-e145466138b6 to disappear Feb 4 13:26:44.765: INFO: Pod downwardapi-volume-904004e9-cc93-4da1-bea5-e145466138b6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:26:44.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9382" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":311,"completed":140,"skipped":2233,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:26:44.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating pod liveness-99ad9258-d015-48d9-83cd-07b4cd9be4f1 in namespace container-probe-9584 Feb 4 13:26:49.186: INFO: Started pod liveness-99ad9258-d015-48d9-83cd-07b4cd9be4f1 in namespace container-probe-9584 STEP: checking the pod's current state and verifying that restartCount is present Feb 4 13:26:49.189: INFO: Initial restart count of pod liveness-99ad9258-d015-48d9-83cd-07b4cd9be4f1 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:30:50.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9584" for this suite. 
• [SLOW TEST:246.155 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":311,"completed":141,"skipped":2242,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:30:50.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8815.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8815.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8815.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8815.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8815.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8815.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 4 13:30:59.726: INFO: DNS probes using dns-8815/dns-test-094858f9-d53d-4841-9bef-593839e5776c succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:30:59.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8815" for this suite. 
• [SLOW TEST:9.194 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":311,"completed":142,"skipped":2258,"failed":0} [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:31:00.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating the pod with failed condition STEP: updating the pod Feb 4 13:33:01.659: INFO: Successfully updated pod "var-expansion-de05da2b-ca4f-4707-bd12-612dbef3a238" STEP: waiting for pod running STEP: deleting the pod gracefully Feb 4 13:33:05.675: INFO: Deleting pod "var-expansion-de05da2b-ca4f-4707-bd12-612dbef3a238" in namespace "var-expansion-629" Feb 4 13:33:05.682: INFO: Wait up to 5m0s for pod "var-expansion-de05da2b-ca4f-4707-bd12-612dbef3a238" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:34:01.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-629" for this suite. 
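The failing-then-recovering expansion above concerns subPathExpr, which expands $(VAR) from the container's environment when resolving a volume subpath: while the referenced value cannot be resolved the container fails to start, and updating the pod so that it resolves lets the kubelet bring the container up, matching the timeline in the log. A working-case sketch (names illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: subpath-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "ls /volume_mount"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: workdir
      mountPath: /volume_mount
      subPathExpr: "$(POD_NAME)"   # resolved against the env list above
  volumes:
  - name: workdir
    emptyDir: {}
EOF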
• [SLOW TEST:181.633 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":311,"completed":143,"skipped":2258,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:34:01.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward api env vars Feb 4 13:34:01.943: INFO: Waiting up to 5m0s for pod "downward-api-c257c3dc-fde9-400b-8ea0-3138dfbbe46c" in namespace "downward-api-3869" to be "Succeeded or Failed" Feb 4 13:34:01.977: INFO: Pod "downward-api-c257c3dc-fde9-400b-8ea0-3138dfbbe46c": Phase="Pending", Reason="", readiness=false. Elapsed: 33.62841ms Feb 4 13:34:03.982: INFO: Pod "downward-api-c257c3dc-fde9-400b-8ea0-3138dfbbe46c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038395998s Feb 4 13:34:05.987: INFO: Pod "downward-api-c257c3dc-fde9-400b-8ea0-3138dfbbe46c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043102743s STEP: Saw pod success Feb 4 13:34:05.987: INFO: Pod "downward-api-c257c3dc-fde9-400b-8ea0-3138dfbbe46c" satisfied condition "Succeeded or Failed" Feb 4 13:34:05.990: INFO: Trying to get logs from node latest-worker pod downward-api-c257c3dc-fde9-400b-8ea0-3138dfbbe46c container dapi-container: STEP: delete the pod Feb 4 13:34:06.048: INFO: Waiting for pod downward-api-c257c3dc-fde9-400b-8ea0-3138dfbbe46c to disappear Feb 4 13:34:06.077: INFO: Pod downward-api-c257c3dc-fde9-400b-8ea0-3138dfbbe46c no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:34:06.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3869" for this suite. 
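Exposing the node's IP to a container uses the downward API fieldRef status.hostIP, the field this test asserts on; a minimal sketch (names illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-hostip-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # the node IP the pod landed on
EOF

kubectl logs downward-api-hostip-demo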
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":311,"completed":144,"skipped":2270,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:34:06.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating secret with name s-test-opt-del-fc4a80ca-89f5-4b8c-b4d0-1aeb655941bb STEP: Creating secret with name s-test-opt-upd-30804934-8c8d-4f9f-8d6d-89df94195cc3 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-fc4a80ca-89f5-4b8c-b4d0-1aeb655941bb STEP: Updating secret s-test-opt-upd-30804934-8c8d-4f9f-8d6d-89df94195cc3 STEP: Creating secret with name s-test-opt-create-652cf71e-4b17-4a87-af41-900d29239166 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:34:14.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1505" for this suite. • [SLOW TEST:8.318 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":311,"completed":145,"skipped":2278,"failed":0} SSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:34:14.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Feb 4 13:34:14.527: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Feb 4 13:34:14.531: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Feb 4 13:34:14.531: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Feb 4 13:34:14.538: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Feb 4 13:34:14.538: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Feb 4 13:34:14.598: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Feb 4 13:34:14.598: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Feb 4 13:34:22.340: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:34:22.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-3372" for this suite. 
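The verification lines above show LimitRange defaulting in action: the pod created with no resource requirements ends up with exactly the defaultRequest/default values (100m CPU, 200Mi memory requests; 500m CPU, 500Mi memory limits), and the pod with partial requirements gets a merge of its own values and the defaults. A sketch of a LimitRange carrying defaults like those (the name is illustrative, and the ephemeral-storage entries from the log are omitted for brevity):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative LimitRange: DefaultRequest fills in missing requests,
	// Default fills in missing limits, per container.
	lr := &corev1.LimitRange{
		ObjectMeta: metav1.ObjectMeta{Name: "limits-demo"},
		Spec: corev1.LimitRangeSpec{
			Limits: []corev1.LimitRangeItem{{
				Type: corev1.LimitTypeContainer,
				DefaultRequest: corev1.ResourceList{ // applied when a pod omits requests
					corev1.ResourceCPU:    resource.MustParse("100m"),
					corev1.ResourceMemory: resource.MustParse("200Mi"),
				},
				Default: corev1.ResourceList{ // applied when a pod omits limits
					corev1.ResourceCPU:    resource.MustParse("500m"),
					corev1.ResourceMemory: resource.MustParse("500Mi"),
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(lr, "", "  ")
	fmt.Println(string(out))
}
```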
• [SLOW TEST:8.122 seconds] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":311,"completed":146,"skipped":2285,"failed":0} SS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:34:22.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77 Feb 4 13:34:23.009: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Registering the sample API server. Feb 4 13:34:23.625: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Feb 4 13:34:26.523: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748042463, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748042463, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748042463, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748042463, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 13:34:28.532: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748042463, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748042463, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748042463, 
loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748042463, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 13:34:30.802: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748042463, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748042463, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748042463, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748042463, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 13:34:32.976: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748042463, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748042463, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748042463, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748042463, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 13:34:35.379: INFO: Waited 845.55647ms for the sample-apiserver to be ready to handle requests. STEP: Read Status for v1alpha1.wardle.example.com STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' STEP: List APIServices Feb 4 13:34:35.443: INFO: Found v1alpha1.wardle.example.com in APIServiceList [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:34:36.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-6445" for this suite. 
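Once the sample-apiserver deployment above becomes available, registration happens through an APIService object that tells the core apiserver to proxy the wardle.example.com/v1alpha1 group-version to an in-cluster Service. A sketch of such an object; the namespace, service name, port, and CA bundle below are placeholders, not what the test actually deploys:

```go
package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
)

func main() {
	// Illustrative APIService: the aggregator proxies requests for
	// wardle.example.com/v1alpha1 to the referenced Service.
	port := int32(443)
	svc := &apiregistrationv1.APIService{
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
		Spec: apiregistrationv1.APIServiceSpec{
			Group:   "wardle.example.com",
			Version: "v1alpha1",
			Service: &apiregistrationv1.ServiceReference{
				Namespace: "aggregator-demo", // placeholder namespace
				Name:      "sample-api",      // placeholder service name
				Port:      &port,
			},
			CABundle:             []byte("...PEM CA for the serving cert..."), // placeholder
			GroupPriorityMinimum: 2000,
			VersionPriority:      200,
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}
```

The `kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}'` step recorded in the log is simply an update of the VersionPriority field shown here.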
• [SLOW TEST:13.975 seconds] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":311,"completed":147,"skipped":2287,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:34:36.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 4 13:34:36.701: INFO: Waiting up to 5m0s for pod "pod-a4d09dc8-67d1-47fd-8154-9c92c6fbfd23" in namespace "emptydir-1192" to be "Succeeded or Failed" Feb 4 13:34:36.949: INFO: Pod "pod-a4d09dc8-67d1-47fd-8154-9c92c6fbfd23": Phase="Pending", Reason="", readiness=false. Elapsed: 248.006048ms Feb 4 13:34:38.953: INFO: Pod "pod-a4d09dc8-67d1-47fd-8154-9c92c6fbfd23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.25110414s Feb 4 13:34:40.957: INFO: Pod "pod-a4d09dc8-67d1-47fd-8154-9c92c6fbfd23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.256032863s STEP: Saw pod success Feb 4 13:34:40.958: INFO: Pod "pod-a4d09dc8-67d1-47fd-8154-9c92c6fbfd23" satisfied condition "Succeeded or Failed" Feb 4 13:34:40.961: INFO: Trying to get logs from node latest-worker2 pod pod-a4d09dc8-67d1-47fd-8154-9c92c6fbfd23 container test-container: STEP: delete the pod Feb 4 13:34:40.980: INFO: Waiting for pod pod-a4d09dc8-67d1-47fd-8154-9c92c6fbfd23 to disappear Feb 4 13:34:40.985: INFO: Pod pod-a4d09dc8-67d1-47fd-8154-9c92c6fbfd23 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:34:40.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1192" for this suite. 
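The emptyDir permission specs all follow the pattern visible above: mount an emptyDir on the node's default medium, write a file with the mode under test, and print its permissions for the framework to scrape from the container logs. A sketch of that pod shape for the (root,0644,default) case; the command, image, and names are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative pod: write a 0644 file into an emptyDir and list it so
	// the permissions appear in the pod logs.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{}, // empty Medium = node default
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.29",
				Command: []string{"sh", "-c",
					"echo hi > /test/file && chmod 0644 /test/file && ls -l /test/file"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```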
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":148,"skipped":2314,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:34:40.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating projection with secret that has name projected-secret-test-47c5da18-3f3d-4a8e-838d-d0d190f5c9f9 STEP: Creating a pod to test consume secrets Feb 4 13:34:41.140: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9e4376e3-9cca-4c98-97fb-1e506ff8f6aa" in namespace "projected-983" to be "Succeeded or Failed" Feb 4 13:34:41.203: INFO: Pod "pod-projected-secrets-9e4376e3-9cca-4c98-97fb-1e506ff8f6aa": Phase="Pending", Reason="", readiness=false. Elapsed: 63.406905ms Feb 4 13:34:43.208: INFO: Pod "pod-projected-secrets-9e4376e3-9cca-4c98-97fb-1e506ff8f6aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067488031s Feb 4 13:34:45.212: INFO: Pod "pod-projected-secrets-9e4376e3-9cca-4c98-97fb-1e506ff8f6aa": Phase="Running", Reason="", readiness=true. Elapsed: 4.072056632s Feb 4 13:34:47.216: INFO: Pod "pod-projected-secrets-9e4376e3-9cca-4c98-97fb-1e506ff8f6aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.075683924s STEP: Saw pod success Feb 4 13:34:47.216: INFO: Pod "pod-projected-secrets-9e4376e3-9cca-4c98-97fb-1e506ff8f6aa" satisfied condition "Succeeded or Failed" Feb 4 13:34:47.219: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-9e4376e3-9cca-4c98-97fb-1e506ff8f6aa container projected-secret-volume-test: STEP: delete the pod Feb 4 13:34:47.257: INFO: Waiting for pod pod-projected-secrets-9e4376e3-9cca-4c98-97fb-1e506ff8f6aa to disappear Feb 4 13:34:47.270: INFO: Pod pod-projected-secrets-9e4376e3-9cca-4c98-97fb-1e506ff8f6aa no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:34:47.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-983" for this suite. 
• [SLOW TEST:6.328 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":149,"skipped":2439,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:34:47.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating secret with name secret-test-dc4cfe72-ccde-431a-b3cb-a0b1abde5def STEP: Creating a pod to test consume secrets Feb 4 13:34:47.476: INFO: Waiting up to 5m0s for pod "pod-secrets-074b3cb9-2821-4d7d-9669-d7d73b978fff" in namespace "secrets-2400" to be "Succeeded or Failed" Feb 4 13:34:47.479: INFO: Pod "pod-secrets-074b3cb9-2821-4d7d-9669-d7d73b978fff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.75607ms Feb 4 13:34:49.632: INFO: Pod "pod-secrets-074b3cb9-2821-4d7d-9669-d7d73b978fff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155962439s Feb 4 13:34:51.698: INFO: Pod "pod-secrets-074b3cb9-2821-4d7d-9669-d7d73b978fff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.222209536s STEP: Saw pod success Feb 4 13:34:51.698: INFO: Pod "pod-secrets-074b3cb9-2821-4d7d-9669-d7d73b978fff" satisfied condition "Succeeded or Failed" Feb 4 13:34:51.701: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-074b3cb9-2821-4d7d-9669-d7d73b978fff container secret-volume-test: STEP: delete the pod Feb 4 13:34:51.744: INFO: Waiting for pod pod-secrets-074b3cb9-2821-4d7d-9669-d7d73b978fff to disappear Feb 4 13:34:51.766: INFO: Pod pod-secrets-074b3cb9-2821-4d7d-9669-d7d73b978fff no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:34:51.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2400" for this suite. 
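The non-root variant adds a pod-level security context: runAsUser drops the container to an unprivileged UID, fsGroup sets the group ownership of the volume's files, and defaultMode must grant group read for the secret to remain readable. A sketch with those three knobs together (UID, GID, mode, and secret name are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative non-root pod: fsGroup owns the secret files, and
	// DefaultMode 0440 keeps them group-readable for the non-root user.
	uid, gid, mode := int64(1000), int64(1000), int32(0440)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-fsgroup-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid, FSGroup: &gid},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "my-secret", DefaultMode: &mode},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox:1.29",
				Command:      []string{"sh", "-c", "id && ls -ln /etc/secret"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```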
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":150,"skipped":2492,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:34:51.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 13:34:51.902: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Feb 4 13:34:51.915: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:34:51.917: INFO: Number of nodes with available pods: 0 Feb 4 13:34:51.917: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:34:52.923: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:34:52.927: INFO: Number of nodes with available pods: 0 Feb 4 13:34:52.927: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:34:54.109: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:34:54.112: INFO: Number of nodes with available pods: 0 Feb 4 13:34:54.112: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:34:54.923: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:34:54.927: INFO: Number of nodes with available pods: 0 Feb 4 13:34:54.927: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:34:55.922: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:34:55.925: INFO: Number of nodes with available pods: 1 Feb 4 13:34:55.925: INFO: Node latest-worker2 is running more than one daemon pod Feb 4 13:34:56.922: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:34:56.925: INFO: Number of nodes with available pods: 2 Feb 4 13:34:56.925: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. 
Feb 4 13:34:57.102: INFO: Wrong image for pod: daemon-set-9rsc8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:34:57.102: INFO: Wrong image for pod: daemon-set-rbwcl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:34:57.107: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:34:58.111: INFO: Wrong image for pod: daemon-set-9rsc8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:34:58.111: INFO: Wrong image for pod: daemon-set-rbwcl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:34:58.115: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:34:59.111: INFO: Wrong image for pod: daemon-set-9rsc8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:34:59.111: INFO: Wrong image for pod: daemon-set-rbwcl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:34:59.116: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:35:00.112: INFO: Wrong image for pod: daemon-set-9rsc8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:35:00.112: INFO: Wrong image for pod: daemon-set-rbwcl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:35:00.112: INFO: Pod daemon-set-rbwcl is not available Feb 4 13:35:00.116: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:35:01.163: INFO: Wrong image for pod: daemon-set-9rsc8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:35:01.163: INFO: Wrong image for pod: daemon-set-rbwcl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:35:01.163: INFO: Pod daemon-set-rbwcl is not available Feb 4 13:35:01.170: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:35:02.146: INFO: Wrong image for pod: daemon-set-9rsc8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:35:02.146: INFO: Wrong image for pod: daemon-set-rbwcl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:35:02.146: INFO: Pod daemon-set-rbwcl is not available Feb 4 13:35:02.180: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:35:03.110: INFO: Wrong image for pod: daemon-set-9rsc8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:35:03.110: INFO: Wrong image for pod: daemon-set-rbwcl. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:35:03.110: INFO: Pod daemon-set-rbwcl is not available Feb 4 13:35:03.114: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:35:04.110: INFO: Wrong image for pod: daemon-set-9rsc8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:35:04.110: INFO: Wrong image for pod: daemon-set-rbwcl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:35:04.110: INFO: Pod daemon-set-rbwcl is not available Feb 4 13:35:04.114: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:35:05.112: INFO: Wrong image for pod: daemon-set-9rsc8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:35:05.112: INFO: Wrong image for pod: daemon-set-rbwcl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:35:05.112: INFO: Pod daemon-set-rbwcl is not available Feb 4 13:35:05.117: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:35:06.110: INFO: Wrong image for pod: daemon-set-9rsc8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:35:06.110: INFO: Wrong image for pod: daemon-set-rbwcl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:35:06.110: INFO: Pod daemon-set-rbwcl is not available Feb 4 13:35:06.113: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:35:07.113: INFO: Wrong image for pod: daemon-set-9rsc8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:35:07.113: INFO: Wrong image for pod: daemon-set-rbwcl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:35:07.113: INFO: Pod daemon-set-rbwcl is not available Feb 4 13:35:07.118: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:35:08.113: INFO: Wrong image for pod: daemon-set-9rsc8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:35:08.113: INFO: Wrong image for pod: daemon-set-rbwcl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:35:08.113: INFO: Pod daemon-set-rbwcl is not available Feb 4 13:35:08.118: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:35:09.112: INFO: Wrong image for pod: daemon-set-9rsc8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:35:09.112: INFO: Wrong image for pod: daemon-set-rbwcl. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:35:09.112: INFO: Pod daemon-set-rbwcl is not available Feb 4 13:35:09.117: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:35:10.112: INFO: Wrong image for pod: daemon-set-9rsc8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:35:10.112: INFO: Wrong image for pod: daemon-set-rbwcl. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:35:10.112: INFO: Pod daemon-set-rbwcl is not available Feb 4 13:35:10.116: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:35:11.112: INFO: Wrong image for pod: daemon-set-9rsc8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:35:11.112: INFO: Pod daemon-set-wsn2n is not available Feb 4 13:35:11.116: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:35:12.112: INFO: Wrong image for pod: daemon-set-9rsc8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:35:12.112: INFO: Pod daemon-set-wsn2n is not available Feb 4 13:35:12.114: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:35:13.111: INFO: Wrong image for pod: daemon-set-9rsc8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:35:13.111: INFO: Pod daemon-set-wsn2n is not available Feb 4 13:35:13.115: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:35:14.112: INFO: Wrong image for pod: daemon-set-9rsc8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:35:14.112: INFO: Pod daemon-set-wsn2n is not available Feb 4 13:35:14.117: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:35:15.113: INFO: Wrong image for pod: daemon-set-9rsc8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:35:15.117: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:35:16.112: INFO: Wrong image for pod: daemon-set-9rsc8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:35:16.112: INFO: Pod daemon-set-9rsc8 is not available Feb 4 13:35:16.117: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:35:17.113: INFO: Wrong image for pod: daemon-set-9rsc8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. 
Feb 4 13:35:17.113: INFO: Pod daemon-set-9rsc8 is not available Feb 4 13:35:17.117: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:35:18.138: INFO: Wrong image for pod: daemon-set-9rsc8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:35:18.138: INFO: Pod daemon-set-9rsc8 is not available Feb 4 13:35:18.141: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:35:19.112: INFO: Wrong image for pod: daemon-set-9rsc8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:35:19.113: INFO: Pod daemon-set-9rsc8 is not available Feb 4 13:35:19.117: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:35:20.113: INFO: Wrong image for pod: daemon-set-9rsc8. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.26, got: docker.io/library/httpd:2.4.38-alpine. Feb 4 13:35:20.113: INFO: Pod daemon-set-9rsc8 is not available Feb 4 13:35:20.117: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:35:21.156: INFO: Pod daemon-set-qtpzg is not available Feb 4 13:35:21.201: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Feb 4 13:35:21.207: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:35:21.210: INFO: Number of nodes with available pods: 1 Feb 4 13:35:21.210: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:35:22.215: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:35:22.217: INFO: Number of nodes with available pods: 1 Feb 4 13:35:22.218: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:35:23.216: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:35:23.220: INFO: Number of nodes with available pods: 1 Feb 4 13:35:23.220: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:35:24.215: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:35:24.219: INFO: Number of nodes with available pods: 1 Feb 4 13:35:24.219: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:35:25.216: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:35:25.220: INFO: Number of nodes with available pods: 2 Feb 4 13:35:25.220: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7643, will wait for the garbage collector to delete the pods Feb 4 13:35:25.294: INFO: Deleting DaemonSet.extensions daemon-set took: 7.892827ms Feb 4 13:35:25.894: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.234224ms Feb 4 13:36:10.797: INFO: Number of nodes with available pods: 0 Feb 4 13:36:10.797: INFO: Number of running nodes: 0, number of available pods: 0 Feb 4 13:36:10.800: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"2093201"},"items":null} Feb 4 13:36:10.803: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2093201"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:36:10.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7643" for this suite. 
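The long image-polling block above is the RollingUpdate in progress: after the template image is swapped from httpd:2.4.38-alpine to agnhost:2.26, the controller replaces pods one node at a time (maxUnavailable defaults to 1), which is why daemon-set-rbwcl and then daemon-set-9rsc8 each cycle through "not available" before their replacements appear. A sketch of a DaemonSet with that strategy spelled out; labels, names, and the container are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Illustrative DaemonSet: a template change triggers a node-by-node
	// rollout bounded by MaxUnavailable.
	maxUnavailable := intstr.FromInt(1)
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type:          appsv1.RollingUpdateDaemonSetStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDaemonSet{MaxUnavailable: &maxUnavailable},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/httpd:2.4.38-alpine", // updating this starts the rollout
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}
```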
• [SLOW TEST:79.044 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":311,"completed":151,"skipped":2511,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:36:10.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 4 13:36:24.217: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:36:24.259: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:36:26.259: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:36:26.263: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:36:28.259: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:36:28.360: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:36:30.259: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:36:30.277: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:36:32.259: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:36:32.264: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:36:34.259: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:36:34.263: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:36:36.259: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:36:36.268: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:36:38.259: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:36:38.263: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:36:40.259: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:36:40.264: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:36:42.259: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:36:42.265: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:36:44.259: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:36:44.264: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 
13:36:46.259: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:36:46.263: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:36:48.259: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:36:48.264: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:36:50.259: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:36:50.265: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:36:52.259: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:36:52.264: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:36:54.259: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:36:54.264: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:36:56.259: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:36:56.282: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:36:58.259: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:36:58.263: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:37:00.259: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:37:00.265: INFO: Pod pod-with-poststart-exec-hook still exists Feb 4 13:37:02.259: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 4 13:37:02.264: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:37:02.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5788" for this suite. • [SLOW TEST:51.455 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":311,"completed":152,"skipped":2518,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:37:02.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test emptydir 0777 on node default medium Feb 4 13:37:02.442: INFO: Waiting up to 5m0s for pod "pod-a36cbec0-97b6-4f7c-8409-c0fc16bdafa0" in namespace "emptydir-621" to be "Succeeded or 
Failed" Feb 4 13:37:02.445: INFO: Pod "pod-a36cbec0-97b6-4f7c-8409-c0fc16bdafa0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.869531ms Feb 4 13:37:04.449: INFO: Pod "pod-a36cbec0-97b6-4f7c-8409-c0fc16bdafa0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006811107s Feb 4 13:37:06.454: INFO: Pod "pod-a36cbec0-97b6-4f7c-8409-c0fc16bdafa0": Phase="Running", Reason="", readiness=true. Elapsed: 4.012474634s Feb 4 13:37:08.459: INFO: Pod "pod-a36cbec0-97b6-4f7c-8409-c0fc16bdafa0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016695276s STEP: Saw pod success Feb 4 13:37:08.459: INFO: Pod "pod-a36cbec0-97b6-4f7c-8409-c0fc16bdafa0" satisfied condition "Succeeded or Failed" Feb 4 13:37:08.462: INFO: Trying to get logs from node latest-worker pod pod-a36cbec0-97b6-4f7c-8409-c0fc16bdafa0 container test-container: STEP: delete the pod Feb 4 13:37:08.529: INFO: Waiting for pod pod-a36cbec0-97b6-4f7c-8409-c0fc16bdafa0 to disappear Feb 4 13:37:08.543: INFO: Pod pod-a36cbec0-97b6-4f7c-8409-c0fc16bdafa0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:37:08.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-621" for this suite. • [SLOW TEST:6.300 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":153,"skipped":2542,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:37:08.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name configmap-test-volume-a2dcb413-1b71-416f-bc9e-3d0926b0170d STEP: Creating a pod to test consume configMaps Feb 4 13:37:08.749: INFO: Waiting up to 5m0s for pod "pod-configmaps-95f9b475-1ac6-49cb-b747-8e6ba7720705" in namespace "configmap-7506" to be "Succeeded or Failed" Feb 4 13:37:08.753: INFO: Pod "pod-configmaps-95f9b475-1ac6-49cb-b747-8e6ba7720705": Phase="Pending", Reason="", readiness=false. Elapsed: 3.355846ms Feb 4 13:37:10.870: INFO: Pod "pod-configmaps-95f9b475-1ac6-49cb-b747-8e6ba7720705": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120522529s Feb 4 13:37:12.875: INFO: Pod "pod-configmaps-95f9b475-1ac6-49cb-b747-8e6ba7720705": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.124995771s Feb 4 13:37:14.879: INFO: Pod "pod-configmaps-95f9b475-1ac6-49cb-b747-8e6ba7720705": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.129110859s STEP: Saw pod success Feb 4 13:37:14.879: INFO: Pod "pod-configmaps-95f9b475-1ac6-49cb-b747-8e6ba7720705" satisfied condition "Succeeded or Failed" Feb 4 13:37:14.882: INFO: Trying to get logs from node latest-worker pod pod-configmaps-95f9b475-1ac6-49cb-b747-8e6ba7720705 container configmap-volume-test: STEP: delete the pod Feb 4 13:37:14.922: INFO: Waiting for pod pod-configmaps-95f9b475-1ac6-49cb-b747-8e6ba7720705 to disappear Feb 4 13:37:14.951: INFO: Pod pod-configmaps-95f9b475-1ac6-49cb-b747-8e6ba7720705 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:37:14.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7506" for this suite. • [SLOW TEST:6.386 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":311,"completed":154,"skipped":2565,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:37:14.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:37:19.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3775" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":311,"completed":155,"skipped":2581,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:37:19.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:37:36.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7447" for this suite. • [SLOW TEST:17.224 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":311,"completed":156,"skipped":2584,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:37:36.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:37:47.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7364" for this suite. 
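Both ResourceQuota specs follow the same arc: create a quota with hard object counts, create the tracked object (a secret in one, a replication controller in the other), watch status.used tick up, delete the object, and watch the usage release. A sketch of such a quota; the particular counts are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative ResourceQuota: hard caps on object counts per namespace.
	// The quota controller reconciles status.used as objects come and go.
	rq := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "quota-demo"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourceSecrets:                resource.MustParse("10"),
				corev1.ResourceReplicationControllers: resource.MustParse("5"),
				corev1.ResourcePods:                   resource.MustParse("5"),
			},
		},
	}
	out, _ := json.MarshalIndent(rq, "", "  ")
	fmt.Println(string(out))
}
```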
• [SLOW TEST:11.165 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":311,"completed":157,"skipped":2585,"failed":0} SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:37:47.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Performing setup for networking test in namespace pod-network-test-7451 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 4 13:37:47.592: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Feb 4 13:37:47.658: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 4 13:37:49.827: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 4 13:37:51.663: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 4 13:37:53.663: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 4 13:37:55.661: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 4 13:37:57.663: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 4 13:37:59.662: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 4 13:38:01.662: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 4 13:38:03.662: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 4 13:38:05.663: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 4 13:38:07.662: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 4 13:38:09.671: INFO: The status of Pod netserver-0 is Running (Ready = true) Feb 4 13:38:09.677: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Feb 4 13:38:13.724: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Feb 4 13:38:13.724: INFO: Breadth first check of 10.244.2.213 on host 172.18.0.14... 
Feb 4 13:38:13.727: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.214:9080/dial?request=hostname&protocol=http&host=10.244.2.213&port=8080&tries=1'] Namespace:pod-network-test-7451 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 4 13:38:13.727: INFO: >>> kubeConfig: /root/.kube/config I0204 13:38:13.765335 7 log.go:181] (0xc000154bb0) (0xc00114c500) Create stream I0204 13:38:13.765368 7 log.go:181] (0xc000154bb0) (0xc00114c500) Stream added, broadcasting: 1 I0204 13:38:13.767270 7 log.go:181] (0xc000154bb0) Reply frame received for 1 I0204 13:38:13.767310 7 log.go:181] (0xc000154bb0) (0xc00114c5a0) Create stream I0204 13:38:13.767327 7 log.go:181] (0xc000154bb0) (0xc00114c5a0) Stream added, broadcasting: 3 I0204 13:38:13.768163 7 log.go:181] (0xc000154bb0) Reply frame received for 3 I0204 13:38:13.768215 7 log.go:181] (0xc000154bb0) (0xc0012620a0) Create stream I0204 13:38:13.768234 7 log.go:181] (0xc000154bb0) (0xc0012620a0) Stream added, broadcasting: 5 I0204 13:38:13.769297 7 log.go:181] (0xc000154bb0) Reply frame received for 5 I0204 13:38:13.854862 7 log.go:181] (0xc000154bb0) Data frame received for 3 I0204 13:38:13.854885 7 log.go:181] (0xc00114c5a0) (3) Data frame handling I0204 13:38:13.854898 7 log.go:181] (0xc00114c5a0) (3) Data frame sent I0204 13:38:13.855148 7 log.go:181] (0xc000154bb0) Data frame received for 3 I0204 13:38:13.855248 7 log.go:181] (0xc00114c5a0) (3) Data frame handling I0204 13:38:13.855285 7 log.go:181] (0xc000154bb0) Data frame received for 5 I0204 13:38:13.855292 7 log.go:181] (0xc0012620a0) (5) Data frame handling I0204 13:38:13.857080 7 log.go:181] (0xc000154bb0) Data frame received for 1 I0204 13:38:13.857092 7 log.go:181] (0xc00114c500) (1) Data frame handling I0204 13:38:13.857102 7 log.go:181] (0xc00114c500) (1) Data frame sent I0204 13:38:13.857114 7 log.go:181] (0xc000154bb0) (0xc00114c500) Stream removed, broadcasting: 1 I0204 13:38:13.857212 7 log.go:181] (0xc000154bb0) (0xc00114c500) Stream removed, broadcasting: 1 I0204 13:38:13.857236 7 log.go:181] (0xc000154bb0) (0xc00114c5a0) Stream removed, broadcasting: 3 I0204 13:38:13.857405 7 log.go:181] (0xc000154bb0) Go away received I0204 13:38:13.857437 7 log.go:181] (0xc000154bb0) (0xc0012620a0) Stream removed, broadcasting: 5 Feb 4 13:38:13.857: INFO: Waiting for responses: map[] Feb 4 13:38:13.857: INFO: reached 10.244.2.213 after 0/1 tries Feb 4 13:38:13.857: INFO: Breadth first check of 10.244.1.60 on host 172.18.0.16... 
Feb 4 13:38:13.860: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.214:9080/dial?request=hostname&protocol=http&host=10.244.1.60&port=8080&tries=1'] Namespace:pod-network-test-7451 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 4 13:38:13.860: INFO: >>> kubeConfig: /root/.kube/config I0204 13:38:13.898338 7 log.go:181] (0xc002804630) (0xc0007e8280) Create stream I0204 13:38:13.898364 7 log.go:181] (0xc002804630) (0xc0007e8280) Stream added, broadcasting: 1 I0204 13:38:13.900209 7 log.go:181] (0xc002804630) Reply frame received for 1 I0204 13:38:13.900254 7 log.go:181] (0xc002804630) (0xc0007e8320) Create stream I0204 13:38:13.900280 7 log.go:181] (0xc002804630) (0xc0007e8320) Stream added, broadcasting: 3 I0204 13:38:13.901291 7 log.go:181] (0xc002804630) Reply frame received for 3 I0204 13:38:13.901347 7 log.go:181] (0xc002804630) (0xc00114c780) Create stream I0204 13:38:13.901360 7 log.go:181] (0xc002804630) (0xc00114c780) Stream added, broadcasting: 5 I0204 13:38:13.902152 7 log.go:181] (0xc002804630) Reply frame received for 5 I0204 13:38:13.978215 7 log.go:181] (0xc002804630) Data frame received for 3 I0204 13:38:13.978236 7 log.go:181] (0xc0007e8320) (3) Data frame handling I0204 13:38:13.978250 7 log.go:181] (0xc0007e8320) (3) Data frame sent I0204 13:38:13.979271 7 log.go:181] (0xc002804630) Data frame received for 5 I0204 13:38:13.979306 7 log.go:181] (0xc00114c780) (5) Data frame handling I0204 13:38:13.979358 7 log.go:181] (0xc002804630) Data frame received for 3 I0204 13:38:13.979373 7 log.go:181] (0xc0007e8320) (3) Data frame handling I0204 13:38:13.980794 7 log.go:181] (0xc002804630) Data frame received for 1 I0204 13:38:13.980807 7 log.go:181] (0xc0007e8280) (1) Data frame handling I0204 13:38:13.980816 7 log.go:181] (0xc0007e8280) (1) Data frame sent I0204 13:38:13.980830 7 log.go:181] (0xc002804630) (0xc0007e8280) Stream removed, broadcasting: 1 I0204 13:38:13.981036 7 log.go:181] (0xc002804630) (0xc0007e8280) Stream removed, broadcasting: 1 I0204 13:38:13.981066 7 log.go:181] (0xc002804630) (0xc0007e8320) Stream removed, broadcasting: 3 I0204 13:38:13.981157 7 log.go:181] (0xc002804630) Go away received I0204 13:38:13.981234 7 log.go:181] (0xc002804630) (0xc00114c780) Stream removed, broadcasting: 5 Feb 4 13:38:13.981: INFO: Waiting for responses: map[] Feb 4 13:38:13.981: INFO: reached 10.244.1.60 after 0/1 tries Feb 4 13:38:13.981: INFO: Going to retry 0 out of 2 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:38:13.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7451" for this suite. 
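[Editor's note] Both "Breadth first check" probes above work the same way: the framework execs curl inside test-container-pod against the agnhost webserver's /dial endpoint on port 9080, which in turn contacts each netserver pod's hostname endpoint on port 8080 and reports the answers back as JSON ("Waiting for responses: map[]" means no endpoint is still outstanding). Stripped of the ExecWithOptions/SPDY stream plumbing that dominates the log, the probe is an ordinary HTTP GET. A self-contained sketch, using the pod IPs from the log (reachable only from inside the cluster network); the example response shape is an assumption about the agnhost dial handler, not copied from this log:

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// Same query the test builds: ask the dial endpoint to contact one
	// netserver pod over HTTP and return its hostname.
	q := url.Values{}
	q.Set("request", "hostname")
	q.Set("protocol", "http")
	q.Set("host", "10.244.2.213") // netserver-0 pod IP from the log
	q.Set("port", "8080")
	q.Set("tries", "1")

	u := "http://10.244.2.214:9080/dial?" + q.Encode()
	resp, err := http.Get(u) // only works from inside the pod network
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // assumed shape: {"responses":["netserver-0"]}
}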
• [SLOW TEST:26.482 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":311,"completed":158,"skipped":2588,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:38:13.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Feb 4 13:38:14.152: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:14.155: INFO: Number of nodes with available pods: 0 Feb 4 13:38:14.155: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:15.160: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:15.163: INFO: Number of nodes with available pods: 0 Feb 4 13:38:15.163: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:16.161: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:16.164: INFO: Number of nodes with available pods: 0 Feb 4 13:38:16.164: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:17.303: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:17.307: INFO: Number of nodes with available pods: 0 Feb 4 13:38:17.307: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:18.303: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:18.721: INFO: Number of nodes with available pods: 0 Feb 4 13:38:18.721: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:19.183: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:19.200: INFO: Number of nodes with available pods: 0 Feb 4 13:38:19.200: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:20.324: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:20.327: INFO: Number of nodes with available pods: 2 Feb 4 13:38:20.327: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
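[Editor's note] The "Stop a daemon pod" step announced above amounts to deleting one of the DaemonSet's pods and polling until the controller replaces it, which is what the long run of status lines that follows is doing (available pods drops to 1, then returns to 2 at 13:39:05). A hedged client-go sketch of that step; the label selector and DaemonSet name are illustrative, not verified against the test source:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	ns, selector := "daemonsets-demo", "daemonset-name=daemon-set" // illustrative

	// Pick one pod managed by the DaemonSet and delete it.
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil || len(pods.Items) == 0 {
		panic("no daemon pods found")
	}
	victim := pods.Items[0].Name
	if err := cs.CoreV1().Pods(ns).Delete(ctx, victim, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}

	// Poll until the DaemonSet reports all desired pods available again,
	// i.e. the controller has revived the deleted pod.
	err = wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		ds, err := cs.AppsV1().DaemonSets(ns).Get(ctx, "daemon-set", metav1.GetOptions{})
		if err != nil {
			return false, nil // treat transient errors as "not yet"
		}
		return ds.Status.NumberAvailable == ds.Status.DesiredNumberScheduled, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("daemon pod", victim, "was replaced by the controller")
}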
Feb 4 13:38:20.389: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:20.393: INFO: Number of nodes with available pods: 1 Feb 4 13:38:20.393: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:21.450: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:21.672: INFO: Number of nodes with available pods: 1 Feb 4 13:38:21.672: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:22.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:22.407: INFO: Number of nodes with available pods: 1 Feb 4 13:38:22.407: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:23.398: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:23.402: INFO: Number of nodes with available pods: 1 Feb 4 13:38:23.402: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:24.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:24.403: INFO: Number of nodes with available pods: 1 Feb 4 13:38:24.403: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:25.403: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:25.435: INFO: Number of nodes with available pods: 1 Feb 4 13:38:25.435: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:26.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:26.403: INFO: Number of nodes with available pods: 1 Feb 4 13:38:26.403: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:27.426: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:27.445: INFO: Number of nodes with available pods: 1 Feb 4 13:38:27.445: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:28.400: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:28.403: INFO: Number of nodes with available pods: 1 Feb 4 13:38:28.403: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:29.401: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:29.405: INFO: Number of nodes with available pods: 1 Feb 4 13:38:29.405: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:30.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:30.403: INFO: Number of nodes with available pods: 1 Feb 4 13:38:30.403: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:31.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:31.403: INFO: Number of nodes with available pods: 1 Feb 4 13:38:31.403: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:32.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:32.403: INFO: Number of nodes with available pods: 1 Feb 4 13:38:32.403: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:33.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:33.402: INFO: Number of nodes with available pods: 1 Feb 4 13:38:33.402: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:34.400: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:34.404: INFO: Number of nodes with available pods: 1 Feb 4 13:38:34.404: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:35.400: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:35.404: INFO: Number of nodes with available pods: 1 Feb 4 13:38:35.404: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:36.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:36.403: INFO: Number of nodes with available pods: 1 Feb 4 13:38:36.403: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:37.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:37.403: INFO: Number of nodes with available pods: 1 Feb 4 13:38:37.403: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:38.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:38.403: INFO: Number of nodes with available pods: 1 Feb 4 13:38:38.403: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:39.400: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:39.404: INFO: Number of nodes with available pods: 1 Feb 4 13:38:39.404: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:40.400: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:40.403: INFO: Number of nodes with available pods: 1 Feb 4 13:38:40.403: INFO: Node 
latest-worker is running more than one daemon pod Feb 4 13:38:41.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:41.402: INFO: Number of nodes with available pods: 1 Feb 4 13:38:41.402: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:42.400: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:42.404: INFO: Number of nodes with available pods: 1 Feb 4 13:38:42.404: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:43.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:43.402: INFO: Number of nodes with available pods: 1 Feb 4 13:38:43.402: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:44.400: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:44.404: INFO: Number of nodes with available pods: 1 Feb 4 13:38:44.404: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:45.400: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:45.403: INFO: Number of nodes with available pods: 1 Feb 4 13:38:45.403: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:46.400: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:46.404: INFO: Number of nodes with available pods: 1 Feb 4 13:38:46.404: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:47.401: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:47.405: INFO: Number of nodes with available pods: 1 Feb 4 13:38:47.405: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:48.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:48.402: INFO: Number of nodes with available pods: 1 Feb 4 13:38:48.402: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:49.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:49.403: INFO: Number of nodes with available pods: 1 Feb 4 13:38:49.403: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:50.400: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:50.404: INFO: Number of nodes with available pods: 1 Feb 4 13:38:50.404: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:51.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:51.403: INFO: Number of nodes with available pods: 1 Feb 4 13:38:51.403: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:52.402: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:52.406: INFO: Number of nodes with available pods: 1 Feb 4 13:38:52.406: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:53.400: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:53.404: INFO: Number of nodes with available pods: 1 Feb 4 13:38:53.404: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:54.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:54.402: INFO: Number of nodes with available pods: 1 Feb 4 13:38:54.402: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:55.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:55.403: INFO: Number of nodes with available pods: 1 Feb 4 13:38:55.403: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:56.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:56.402: INFO: Number of nodes with available pods: 1 Feb 4 13:38:56.402: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:57.400: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:57.404: INFO: Number of nodes with available pods: 1 Feb 4 13:38:57.404: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:58.398: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:58.404: INFO: Number of nodes with available pods: 1 Feb 4 13:38:58.404: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:38:59.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:38:59.402: INFO: Number of nodes with available pods: 1 Feb 4 13:38:59.402: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:39:00.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:39:00.403: INFO: Number of nodes with available pods: 1 Feb 4 13:39:00.403: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:39:01.398: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:39:01.403: INFO: Number of nodes with 
available pods: 1 Feb 4 13:39:01.403: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:39:02.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:39:02.402: INFO: Number of nodes with available pods: 1 Feb 4 13:39:02.402: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:39:03.471: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:39:03.475: INFO: Number of nodes with available pods: 1 Feb 4 13:39:03.475: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:39:04.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:39:04.403: INFO: Number of nodes with available pods: 1 Feb 4 13:39:04.403: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:39:05.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:39:05.403: INFO: Number of nodes with available pods: 2 Feb 4 13:39:05.403: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-233, will wait for the garbage collector to delete the pods Feb 4 13:39:05.467: INFO: Deleting DaemonSet.extensions daemon-set took: 6.95796ms Feb 4 13:39:06.067: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.235201ms Feb 4 13:40:10.770: INFO: Number of nodes with available pods: 0 Feb 4 13:40:10.770: INFO: Number of running nodes: 0, number of available pods: 0 Feb 4 13:40:10.773: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"2094507"},"items":null} Feb 4 13:40:10.774: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2094507"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:40:10.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-233" for this suite. 
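[Editor's note] Every polling line above that skips latest-control-plane is the framework honoring the node-role.kubernetes.io/master:NoSchedule taint shown in the message: the test's DaemonSet carries no toleration for it, so the control-plane node is excluded from the expected-node count and only the two workers are checked. For contrast, a DaemonSet that should also run on such tainted nodes would declare a toleration in its pod template. A minimal sketch of just that fragment, with the key and effect taken from the log's skip messages:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Toleration matching the control-plane taint seen in the log; the
	// conformance DaemonSet deliberately omits it.
	tol := corev1.Toleration{
		Key:      "node-role.kubernetes.io/master",
		Operator: corev1.TolerationOpExists, // match the taint regardless of value
		Effect:   corev1.TaintEffectNoSchedule,
	}
	// In a real DaemonSet this belongs in spec.template.spec.tolerations.
	fmt.Printf("%+v\n", tol)
}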
• [SLOW TEST:116.796 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":311,"completed":159,"skipped":2611,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:40:10.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating the pod Feb 4 13:40:17.475: INFO: Successfully updated pod "labelsupdate5d5ad735-7e71-49ad-905e-ce4b721db7dd" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:40:19.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6183" for this suite. 
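[Editor's note] The labels-update test above relies on the downward API volume being refreshed by the kubelet when pod metadata changes: the pod mounts its own metadata.labels as a file, the test patches the pod's labels, and "Successfully updated pod" is logged once the mounted file reflects the change. A sketch of the relevant pod-spec fragment; the volume name and mount path are illustrative, not taken from the test source:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Volume exposing the pod's own labels as a file; the kubelet rewrites
	// the file after a label update, which is the event the test waits for.
	vol := corev1.Volume{
		Name: "podinfo", // illustrative name
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "labels",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
				}},
			},
		},
	}
	// Mounted into the container, the labels appear at <MountPath>/labels.
	mount := corev1.VolumeMount{Name: "podinfo", MountPath: "/etc/podinfo"}
	fmt.Printf("%+v %+v\n", vol, mount)
}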
• [SLOW TEST:8.734 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":311,"completed":160,"skipped":2615,"failed":0} SSSS ------------------------------ [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:40:19.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a Deployment STEP: waiting for Deployment to be created STEP: waiting for all Replicas to be Ready Feb 4 13:40:19.630: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 0 and labels map[test-deployment-static:true] Feb 4 13:40:19.630: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 0 and labels map[test-deployment-static:true] Feb 4 13:40:19.651: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 0 and labels map[test-deployment-static:true] Feb 4 13:40:19.651: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 0 and labels map[test-deployment-static:true] Feb 4 13:40:19.772: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 0 and labels map[test-deployment-static:true] Feb 4 13:40:19.772: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 0 and labels map[test-deployment-static:true] Feb 4 13:40:19.886: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 0 and labels map[test-deployment-static:true] Feb 4 13:40:19.886: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 0 and labels map[test-deployment-static:true] Feb 4 13:40:24.167: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 1 and labels map[test-deployment-static:true] Feb 4 13:40:24.167: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 1 and labels map[test-deployment-static:true] Feb 4 13:40:25.427: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 2 and labels map[test-deployment-static:true] STEP: patching the Deployment Feb 4 13:40:25.463: INFO: observed event type ADDED STEP: waiting for Replicas to scale Feb 4 13:40:25.465: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 0 Feb 4 
13:40:25.465: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 0 Feb 4 13:40:25.465: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 0 Feb 4 13:40:25.465: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 0 Feb 4 13:40:25.465: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 0 Feb 4 13:40:25.465: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 0 Feb 4 13:40:25.465: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 0 Feb 4 13:40:25.465: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 0 Feb 4 13:40:25.465: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 1 Feb 4 13:40:25.465: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 1 Feb 4 13:40:25.466: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 2 Feb 4 13:40:25.466: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 2 Feb 4 13:40:25.466: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 2 Feb 4 13:40:25.466: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 2 Feb 4 13:40:25.503: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 2 Feb 4 13:40:25.503: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 2 Feb 4 13:40:25.578: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 2 Feb 4 13:40:25.578: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 2 Feb 4 13:40:25.868: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 2 Feb 4 13:40:25.868: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 2 Feb 4 13:40:26.026: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 1 STEP: listing Deployments Feb 4 13:40:26.296: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] STEP: updating the Deployment Feb 4 13:40:26.370: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 1 STEP: fetching the DeploymentStatus Feb 4 13:40:26.395: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Feb 4 13:40:26.465: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Feb 4 13:40:26.518: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Feb 4 13:40:27.086: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Feb 4 13:40:27.182: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Feb 4 13:40:27.194: INFO: observed Deployment test-deployment in namespace deployment-2520 with 
ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Feb 4 13:40:27.338: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Feb 4 13:40:27.344: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] STEP: patching the DeploymentStatus STEP: fetching the DeploymentStatus Feb 4 13:40:32.507: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 1 Feb 4 13:40:32.507: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 1 Feb 4 13:40:32.507: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 1 Feb 4 13:40:32.507: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 1 Feb 4 13:40:32.507: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 1 Feb 4 13:40:32.507: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 1 Feb 4 13:40:32.507: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 1 Feb 4 13:40:32.508: INFO: observed Deployment test-deployment in namespace deployment-2520 with ReadyReplicas 1 STEP: deleting the Deployment Feb 4 13:40:32.656: INFO: observed event type MODIFIED Feb 4 13:40:32.656: INFO: observed event type MODIFIED Feb 4 13:40:32.656: INFO: observed event type MODIFIED Feb 4 13:40:32.656: INFO: observed event type MODIFIED Feb 4 13:40:32.656: INFO: observed event type MODIFIED Feb 4 13:40:32.656: INFO: observed event type MODIFIED Feb 4 13:40:32.656: INFO: observed event type MODIFIED Feb 4 13:40:32.657: INFO: observed event type MODIFIED Feb 4 13:40:32.657: INFO: observed event type MODIFIED Feb 4 13:40:32.657: INFO: observed event type MODIFIED Feb 4 13:40:32.657: INFO: observed event type MODIFIED Feb 4 13:40:32.657: INFO: observed event type MODIFIED Feb 4 13:40:32.657: INFO: observed event type MODIFIED [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Feb 4 13:40:32.669: INFO: Log out all the ReplicaSets if there is no deployment created Feb 4 13:40:32.672: INFO: ReplicaSet "test-deployment-69b959cd7": &ReplicaSet{ObjectMeta:{test-deployment-69b959cd7 deployment-2520 977e0ca2-19d8-40a3-83e8-ace324f2938e 2094646 2 2021-02-04 13:40:19 +0000 UTC map[pod-template-hash:69b959cd7 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment 1e24b0e9-219f-410f-bf2b-08a57f85dbad 0xc00724d147 0xc00724d148}] [] [{kube-controller-manager Update apps/v1 2021-02-04 13:40:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e24b0e9-219f-410f-bf2b-08a57f85dbad\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 69b959cd7,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:69b959cd7 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/agnhost:2.26 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00724d1b0 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Feb 4 13:40:32.675: INFO: pod: "test-deployment-69b959cd7-hflq6": &Pod{ObjectMeta:{test-deployment-69b959cd7-hflq6 test-deployment-69b959cd7- deployment-2520 13c005cd-3b18-4597-9437-3406fcd4f62a 2094615 0 2021-02-04 13:40:19 +0000 UTC map[pod-template-hash:69b959cd7 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-69b959cd7 977e0ca2-19d8-40a3-83e8-ace324f2938e 0xc00724d587 0xc00724d588}] [] [{kube-controller-manager Update v1 2021-02-04 13:40:19 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"977e0ca2-19d8-40a3-83e8-ace324f2938e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:40:25 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.68\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjgkb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjgkb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/agnhost:2.26,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjgkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:40:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:40:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2021-02-04 13:40:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:40:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.1.68,StartTime:2021-02-04 13:40:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-04 13:40:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.26,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e,ContainerID:containerd://a5686c8fcbe3e357298c2880c931f236c782614aedeacf30f83c7d56c9709ae8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.68,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:40:32.675: INFO: ReplicaSet "test-deployment-768947d6f5": &ReplicaSet{ObjectMeta:{test-deployment-768947d6f5 deployment-2520 1be9f5ce-31c1-44f5-89ef-202b8dd6c7dc 2094719 3 2021-02-04 13:40:26 +0000 UTC map[pod-template-hash:768947d6f5 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment 1e24b0e9-219f-410f-bf2b-08a57f85dbad 0xc00724d217 0xc00724d218}] [] [{kube-controller-manager Update apps/v1 2021-02-04 13:40:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e24b0e9-219f-410f-bf2b-08a57f85dbad\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 768947d6f5,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:768947d6f5 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00724d280 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:3,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Feb 4 13:40:32.678: INFO: pod: "test-deployment-768947d6f5-d52qw": &Pod{ObjectMeta:{test-deployment-768947d6f5-d52qw test-deployment-768947d6f5- deployment-2520 7622f7f3-3f26-4f08-893b-83aa1ab75a93 2094702 0 2021-02-04 13:40:27 +0000 UTC map[pod-template-hash:768947d6f5 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-768947d6f5 1be9f5ce-31c1-44f5-89ef-202b8dd6c7dc 0xc0055b1637 0xc0055b1638}] [] [{kube-controller-manager Update v1 2021-02-04 13:40:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1be9f5ce-31c1-44f5-89ef-202b8dd6c7dc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:40:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.223\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjgkb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjgkb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjgkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:
nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:40:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:40:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:40:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:40:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.2.223,StartTime:2021-02-04 13:40:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-04 13:40:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://811f9a718bd64932f3bcba7804a41d5b6e9183c7626e0c37cf3db72301873df3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.223,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:40:32.678: INFO: pod: "test-deployment-768947d6f5-n6p8p": &Pod{ObjectMeta:{test-deployment-768947d6f5-n6p8p test-deployment-768947d6f5- deployment-2520 cc5afb5c-9171-4331-a025-3a0dfd46f8b4 2094726 0 2021-02-04 13:40:32 +0000 UTC map[pod-template-hash:768947d6f5 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-768947d6f5 1be9f5ce-31c1-44f5-89ef-202b8dd6c7dc 0xc0055b17f7 0xc0055b17f8}] [] [{kube-controller-manager Update v1 2021-02-04 13:40:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1be9f5ce-31c1-44f5-89ef-202b8dd6c7dc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:40:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjgkb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjgkb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjgkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeC
lassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:40:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:40:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [test-deployment],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:40:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [test-deployment],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:40:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2021-02-04 13:40:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:40:32.679: INFO: ReplicaSet "test-deployment-7c65d4bcf9": &ReplicaSet{ObjectMeta:{test-deployment-7c65d4bcf9 deployment-2520 e4b3d376-1950-47d4-9780-8873c1286922 2094723 4 2021-02-04 13:40:25 +0000 UTC map[pod-template-hash:7c65d4bcf9 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment 1e24b0e9-219f-410f-bf2b-08a57f85dbad 0xc00724d2e7 0xc00724d2e8}] [] [{kube-controller-manager Update apps/v1 2021-02-04 13:40:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1e24b0e9-219f-410f-bf2b-08a57f85dbad\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7c65d4bcf9,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:7c65d4bcf9 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment 
k8s.gcr.io/pause:3.2 [/bin/sleep 100000] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00724d368 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:40:32.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2520" for this suite. • [SLOW TEST:13.433 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":311,"completed":161,"skipped":2619,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:40:32.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name configmap-test-volume-map-4efad999-6c44-4a52-b051-f9b851bf282f STEP: Creating a pod to test consume configMaps Feb 4 13:40:33.074: INFO: Waiting up to 5m0s for pod "pod-configmaps-4ea5b9b1-b7a1-4594-b214-743edf142844" in namespace "configmap-6732" to be "Succeeded or Failed" Feb 4 13:40:33.077: INFO: Pod "pod-configmaps-4ea5b9b1-b7a1-4594-b214-743edf142844": Phase="Pending", Reason="", readiness=false. Elapsed: 3.280828ms Feb 4 13:40:35.081: INFO: Pod "pod-configmaps-4ea5b9b1-b7a1-4594-b214-743edf142844": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007262542s Feb 4 13:40:37.086: INFO: Pod "pod-configmaps-4ea5b9b1-b7a1-4594-b214-743edf142844": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011572701s Feb 4 13:40:39.089: INFO: Pod "pod-configmaps-4ea5b9b1-b7a1-4594-b214-743edf142844": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.015129469s STEP: Saw pod success Feb 4 13:40:39.089: INFO: Pod "pod-configmaps-4ea5b9b1-b7a1-4594-b214-743edf142844" satisfied condition "Succeeded or Failed" Feb 4 13:40:39.092: INFO: Trying to get logs from node latest-worker pod pod-configmaps-4ea5b9b1-b7a1-4594-b214-743edf142844 container agnhost-container: STEP: delete the pod Feb 4 13:40:39.185: INFO: Waiting for pod pod-configmaps-4ea5b9b1-b7a1-4594-b214-743edf142844 to disappear Feb 4 13:40:39.228: INFO: Pod pod-configmaps-4ea5b9b1-b7a1-4594-b214-743edf142844 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:40:39.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6732" for this suite. • [SLOW TEST:6.281 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":311,"completed":162,"skipped":2708,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:40:39.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 4 13:40:39.963: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 4 13:40:42.160: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748042840, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748042840, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748042840, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748042839, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, 
CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 4 13:40:45.237: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 13:40:45.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9326-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:40:46.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8359" for this suite. STEP: Destroying namespace "webhook-8359-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:7.321 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":311,"completed":163,"skipped":2713,"failed":0} SSSS ------------------------------ [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:40:46.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:40:47.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8879" for this suite. 
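For reference, the ServiceAccount lifecycle exercised above (create, watch, patch, list by label selector, delete) can be reproduced with a minimal manifest along these lines; the name and label are illustrative, not taken from this run:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-sa              # illustrative name
  labels:
    e2e: sa-lifecycle           # label used later to locate it via a label selector

Once created, the object can be patched (for example to add another label), found with a label selector, and deleted, which is the sequence the test drives through the API.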
•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":311,"completed":164,"skipped":2717,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:40:47.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 4 13:40:47.443: INFO: Waiting up to 5m0s for pod "pod-bb3fb8cd-2942-42f7-9421-87ec00cd176c" in namespace "emptydir-5631" to be "Succeeded or Failed" Feb 4 13:40:47.631: INFO: Pod "pod-bb3fb8cd-2942-42f7-9421-87ec00cd176c": Phase="Pending", Reason="", readiness=false. Elapsed: 188.200765ms Feb 4 13:40:49.636: INFO: Pod "pod-bb3fb8cd-2942-42f7-9421-87ec00cd176c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193078067s Feb 4 13:40:51.652: INFO: Pod "pod-bb3fb8cd-2942-42f7-9421-87ec00cd176c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.209284128s STEP: Saw pod success Feb 4 13:40:51.652: INFO: Pod "pod-bb3fb8cd-2942-42f7-9421-87ec00cd176c" satisfied condition "Succeeded or Failed" Feb 4 13:40:51.655: INFO: Trying to get logs from node latest-worker pod pod-bb3fb8cd-2942-42f7-9421-87ec00cd176c container test-container: STEP: delete the pod Feb 4 13:40:51.759: INFO: Waiting for pod pod-bb3fb8cd-2942-42f7-9421-87ec00cd176c to disappear Feb 4 13:40:51.772: INFO: Pod pod-bb3fb8cd-2942-42f7-9421-87ec00cd176c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:40:51.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5631" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":165,"skipped":2767,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:40:51.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Feb 4 13:40:52.059: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f345b4f5-1cf6-45dd-9bf0-4b936b1b9ceb" in namespace "downward-api-7857" to be "Succeeded or Failed" Feb 4 13:40:52.095: INFO: Pod "downwardapi-volume-f345b4f5-1cf6-45dd-9bf0-4b936b1b9ceb": Phase="Pending", Reason="", readiness=false. Elapsed: 36.423246ms Feb 4 13:40:54.122: INFO: Pod "downwardapi-volume-f345b4f5-1cf6-45dd-9bf0-4b936b1b9ceb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06321277s Feb 4 13:40:56.126: INFO: Pod "downwardapi-volume-f345b4f5-1cf6-45dd-9bf0-4b936b1b9ceb": Phase="Running", Reason="", readiness=true. Elapsed: 4.067655368s Feb 4 13:40:58.131: INFO: Pod "downwardapi-volume-f345b4f5-1cf6-45dd-9bf0-4b936b1b9ceb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.072319749s STEP: Saw pod success Feb 4 13:40:58.131: INFO: Pod "downwardapi-volume-f345b4f5-1cf6-45dd-9bf0-4b936b1b9ceb" satisfied condition "Succeeded or Failed" Feb 4 13:40:58.134: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-f345b4f5-1cf6-45dd-9bf0-4b936b1b9ceb container client-container: STEP: delete the pod Feb 4 13:40:58.215: INFO: Waiting for pod downwardapi-volume-f345b4f5-1cf6-45dd-9bf0-4b936b1b9ceb to disappear Feb 4 13:40:58.229: INFO: Pod downwardapi-volume-f345b4f5-1cf6-45dd-9bf0-4b936b1b9ceb no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:40:58.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7857" for this suite. 
• [SLOW TEST:6.317 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":311,"completed":166,"skipped":2805,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:40:58.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating the pod Feb 4 13:41:03.103: INFO: Successfully updated pod "labelsupdatea7d07823-d140-4b79-bebd-8caf586b5bea" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:41:05.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1748" for this suite. 
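The labels-update test above depends on the kubelet refreshing downward API files when pod metadata changes. A minimal projected-volume sketch that makes this observable (names and the sleep interval are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: labels-update-demo       # illustrative
  labels:
    key1: value1
spec:
  containers:
  - name: client-container
    image: busybox:1.33          # assumption
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels

Editing the pod's labels (for example with kubectl label pod labels-update-demo key2=value2) should be reflected in /etc/podinfo/labels after the kubelet's next sync, which is the behavior the test asserts.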
• [SLOW TEST:6.920 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":311,"completed":167,"skipped":2836,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:41:05.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Performing setup for networking test in namespace pod-network-test-8253 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 4 13:41:05.246: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Feb 4 13:41:05.335: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 4 13:41:07.410: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 4 13:41:09.344: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 4 13:41:11.341: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 4 13:41:13.340: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 4 13:41:15.340: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 4 13:41:17.340: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 4 13:41:19.340: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 4 13:41:21.365: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 4 13:41:23.338: INFO: The status of Pod netserver-0 is Running (Ready = true) Feb 4 13:41:23.343: INFO: The status of Pod netserver-1 is Running (Ready = false) Feb 4 13:41:25.349: INFO: The status of Pod netserver-1 is Running (Ready = false) Feb 4 13:41:27.347: INFO: The status of Pod netserver-1 is Running (Ready = false) Feb 4 13:41:29.348: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Feb 4 13:41:35.431: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Feb 4 13:41:35.431: INFO: Going to poll 10.244.2.230 on port 8081 at least 0 times, with a maximum of 34 tries before failing Feb 4 13:41:35.434: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.230 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8253 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 4 
13:41:35.434: INFO: >>> kubeConfig: /root/.kube/config I0204 13:41:35.479426 7 log.go:181] (0xc000155600) (0xc002cb3ea0) Create stream I0204 13:41:35.479460 7 log.go:181] (0xc000155600) (0xc002cb3ea0) Stream added, broadcasting: 1 I0204 13:41:35.481414 7 log.go:181] (0xc000155600) Reply frame received for 1 I0204 13:41:35.481474 7 log.go:181] (0xc000155600) (0xc0036aa000) Create stream I0204 13:41:35.481500 7 log.go:181] (0xc000155600) (0xc0036aa000) Stream added, broadcasting: 3 I0204 13:41:35.482391 7 log.go:181] (0xc000155600) Reply frame received for 3 I0204 13:41:35.482425 7 log.go:181] (0xc000155600) (0xc000f05ea0) Create stream I0204 13:41:35.482438 7 log.go:181] (0xc000155600) (0xc000f05ea0) Stream added, broadcasting: 5 I0204 13:41:35.483320 7 log.go:181] (0xc000155600) Reply frame received for 5 I0204 13:41:36.556499 7 log.go:181] (0xc000155600) Data frame received for 3 I0204 13:41:36.556543 7 log.go:181] (0xc0036aa000) (3) Data frame handling I0204 13:41:36.556565 7 log.go:181] (0xc0036aa000) (3) Data frame sent I0204 13:41:36.556878 7 log.go:181] (0xc000155600) Data frame received for 3 I0204 13:41:36.556917 7 log.go:181] (0xc0036aa000) (3) Data frame handling I0204 13:41:36.557059 7 log.go:181] (0xc000155600) Data frame received for 5 I0204 13:41:36.557082 7 log.go:181] (0xc000f05ea0) (5) Data frame handling I0204 13:41:36.559159 7 log.go:181] (0xc000155600) Data frame received for 1 I0204 13:41:36.559182 7 log.go:181] (0xc002cb3ea0) (1) Data frame handling I0204 13:41:36.559206 7 log.go:181] (0xc002cb3ea0) (1) Data frame sent I0204 13:41:36.559224 7 log.go:181] (0xc000155600) (0xc002cb3ea0) Stream removed, broadcasting: 1 I0204 13:41:36.559320 7 log.go:181] (0xc000155600) (0xc002cb3ea0) Stream removed, broadcasting: 1 I0204 13:41:36.559341 7 log.go:181] (0xc000155600) (0xc0036aa000) Stream removed, broadcasting: 3 I0204 13:41:36.559352 7 log.go:181] (0xc000155600) (0xc000f05ea0) Stream removed, broadcasting: 5 Feb 4 13:41:36.559: INFO: Found all 1 expected endpoints: [netserver-0] Feb 4 13:41:36.559: INFO: Going to poll 10.244.1.69 on port 8081 at least 0 times, with a maximum of 34 tries before failing I0204 13:41:36.559560 7 log.go:181] (0xc000155600) Go away received Feb 4 13:41:36.589: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.69 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8253 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 4 13:41:36.590: INFO: >>> kubeConfig: /root/.kube/config I0204 13:41:36.673512 7 log.go:181] (0xc002805d90) (0xc00273e460) Create stream I0204 13:41:36.673553 7 log.go:181] (0xc002805d90) (0xc00273e460) Stream added, broadcasting: 1 I0204 13:41:36.675971 7 log.go:181] (0xc002805d90) Reply frame received for 1 I0204 13:41:36.676031 7 log.go:181] (0xc002805d90) (0xc00273e500) Create stream I0204 13:41:36.676062 7 log.go:181] (0xc002805d90) (0xc00273e500) Stream added, broadcasting: 3 I0204 13:41:36.677325 7 log.go:181] (0xc002805d90) Reply frame received for 3 I0204 13:41:36.677371 7 log.go:181] (0xc002805d90) (0xc002cb3f40) Create stream I0204 13:41:36.677388 7 log.go:181] (0xc002805d90) (0xc002cb3f40) Stream added, broadcasting: 5 I0204 13:41:36.678438 7 log.go:181] (0xc002805d90) Reply frame received for 5 I0204 13:41:37.754096 7 log.go:181] (0xc002805d90) Data frame received for 3 I0204 13:41:37.754204 7 log.go:181] (0xc00273e500) (3) Data frame handling I0204 13:41:37.754250 7 log.go:181] (0xc00273e500) 
(3) Data frame sent I0204 13:41:37.754516 7 log.go:181] (0xc002805d90) Data frame received for 5 I0204 13:41:37.754574 7 log.go:181] (0xc002cb3f40) (5) Data frame handling I0204 13:41:37.755223 7 log.go:181] (0xc002805d90) Data frame received for 3 I0204 13:41:37.755302 7 log.go:181] (0xc00273e500) (3) Data frame handling I0204 13:41:37.757197 7 log.go:181] (0xc002805d90) Data frame received for 1 I0204 13:41:37.757231 7 log.go:181] (0xc00273e460) (1) Data frame handling I0204 13:41:37.757248 7 log.go:181] (0xc00273e460) (1) Data frame sent I0204 13:41:37.757268 7 log.go:181] (0xc002805d90) (0xc00273e460) Stream removed, broadcasting: 1 I0204 13:41:37.757332 7 log.go:181] (0xc002805d90) Go away received I0204 13:41:37.757378 7 log.go:181] (0xc002805d90) (0xc00273e460) Stream removed, broadcasting: 1 I0204 13:41:37.757417 7 log.go:181] (0xc002805d90) (0xc00273e500) Stream removed, broadcasting: 3 I0204 13:41:37.757442 7 log.go:181] (0xc002805d90) (0xc002cb3f40) Stream removed, broadcasting: 5 Feb 4 13:41:37.757: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:41:37.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8253" for this suite. • [SLOW TEST:32.608 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":168,"skipped":2862,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:41:37.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:41:38.046: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "kubelet-test-9954" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":311,"completed":169,"skipped":2931,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:41:38.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Feb 4 13:41:38.127: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ccfced8c-e64b-4a43-8081-2d589e774c1c" in namespace "projected-7002" to be "Succeeded or Failed" Feb 4 13:41:38.195: INFO: Pod "downwardapi-volume-ccfced8c-e64b-4a43-8081-2d589e774c1c": Phase="Pending", Reason="", readiness=false. Elapsed: 67.418493ms Feb 4 13:41:40.200: INFO: Pod "downwardapi-volume-ccfced8c-e64b-4a43-8081-2d589e774c1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072578562s Feb 4 13:41:42.203: INFO: Pod "downwardapi-volume-ccfced8c-e64b-4a43-8081-2d589e774c1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075964679s STEP: Saw pod success Feb 4 13:41:42.203: INFO: Pod "downwardapi-volume-ccfced8c-e64b-4a43-8081-2d589e774c1c" satisfied condition "Succeeded or Failed" Feb 4 13:41:42.206: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-ccfced8c-e64b-4a43-8081-2d589e774c1c container client-container: STEP: delete the pod Feb 4 13:41:42.273: INFO: Waiting for pod downwardapi-volume-ccfced8c-e64b-4a43-8081-2d589e774c1c to disappear Feb 4 13:41:42.289: INFO: Pod downwardapi-volume-ccfced8c-e64b-4a43-8081-2d589e774c1c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:41:42.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7002" for this suite. 
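The memory-request test above reads a projected downward API file backed by requests.memory. A minimal sketch (the request size, image, and file name are assumptions; the file surfaces the value in bytes):

apiVersion: v1
kind: Pod
metadata:
  name: memory-request-demo      # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.33          # assumption
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi             # surfaced in the file as bytes (33554432)
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory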
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":311,"completed":170,"skipped":2941,"failed":0} SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:41:42.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Feb 4 13:41:42.845: INFO: Waiting up to 1m0s for all nodes to be ready Feb 4 13:42:42.869: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Create pods that use 2/3 of node resources. Feb 4 13:42:42.890: INFO: Created pod: pod0-sched-preemption-low-priority Feb 4 13:42:43.181: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:43:25.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-4803" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:103.648 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":311,"completed":171,"skipped":2949,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:43:25.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 13:43:26.038: INFO: Creating ReplicaSet my-hostname-basic-fb177027-5b66-42e6-a82b-a05c3f55dddb Feb 4 13:43:26.084: INFO: Pod name my-hostname-basic-fb177027-5b66-42e6-a82b-a05c3f55dddb: Found 0 pods out of 1 Feb 4 13:43:31.141: INFO: Pod name my-hostname-basic-fb177027-5b66-42e6-a82b-a05c3f55dddb: Found 1 pods out of 1 Feb 4 13:43:31.141: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-fb177027-5b66-42e6-a82b-a05c3f55dddb" is running Feb 4 13:43:31.152: INFO: Pod "my-hostname-basic-fb177027-5b66-42e6-a82b-a05c3f55dddb-2kqw9" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-04 13:43:26 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-04 13:43:29 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-04 13:43:29 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-04 13:43:26 +0000 UTC Reason: Message:}]) Feb 4 13:43:31.152: INFO: Trying to dial the pod Feb 4 13:43:36.163: INFO: Controller my-hostname-basic-fb177027-5b66-42e6-a82b-a05c3f55dddb: Got expected result from replica 1 [my-hostname-basic-fb177027-5b66-42e6-a82b-a05c3f55dddb-2kqw9]: "my-hostname-basic-fb177027-5b66-42e6-a82b-a05c3f55dddb-2kqw9", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:43:36.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5138" for this suite. 
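The ReplicaSet test above serves each replica's hostname over HTTP and dials every pod until it answers with its own name. A minimal equivalent manifest (the name is illustrative; agnhost's serve-hostname mode listens on 9376 by default):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: serve-hostname-demo      # illustrative
spec:
  replicas: 1
  selector:
    matchLabels:
      name: serve-hostname-demo
  template:
    metadata:
      labels:
        name: serve-hostname-demo
    spec:
      containers:
      - name: serve-hostname
        image: k8s.gcr.io/e2e-test-images/agnhost:2.26   # image already used in this run
        args: ["serve-hostname"]
        ports:
        - containerPort: 9376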
• [SLOW TEST:10.224 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":311,"completed":172,"skipped":2959,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:43:36.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-1064 [It] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-1064 STEP: Creating statefulset with conflicting port in namespace statefulset-1064 STEP: Waiting until pod test-pod will start running in namespace statefulset-1064 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1064 Feb 4 13:43:42.460: INFO: Observed stateful pod in namespace: statefulset-1064, name: ss-0, uid: 95a9ad75-7994-4485-9f09-5261dac24d5d, status phase: Pending. Waiting for statefulset controller to delete. Feb 4 13:43:42.576: INFO: Observed stateful pod in namespace: statefulset-1064, name: ss-0, uid: 95a9ad75-7994-4485-9f09-5261dac24d5d, status phase: Failed. Waiting for statefulset controller to delete. Feb 4 13:43:42.584: INFO: Observed stateful pod in namespace: statefulset-1064, name: ss-0, uid: 95a9ad75-7994-4485-9f09-5261dac24d5d, status phase: Failed. Waiting for statefulset controller to delete. 
Feb 4 13:43:42.632: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1064 STEP: Removing pod with conflicting port in namespace statefulset-1064 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-1064 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Feb 4 13:43:48.777: INFO: Deleting all statefulset in ns statefulset-1064 Feb 4 13:43:48.780: INFO: Scaling statefulset ss to 0 Feb 4 13:44:18.852: INFO: Waiting for statefulset status.replicas updated to 0 Feb 4 13:44:18.854: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:44:18.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1064" for this suite. • [SLOW TEST:42.706 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":311,"completed":173,"skipped":2975,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:44:18.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating secret secrets-8155/secret-test-253ef149-fbf1-4e58-817b-77afda9d1aea STEP: Creating a pod to test consume secrets Feb 4 13:44:19.097: INFO: Waiting up to 5m0s for pod "pod-configmaps-aa22ece4-cde3-4b44-9727-2bd3a4f47261" in namespace "secrets-8155" to be "Succeeded or Failed" Feb 4 13:44:19.105: INFO: Pod "pod-configmaps-aa22ece4-cde3-4b44-9727-2bd3a4f47261": Phase="Pending", Reason="", readiness=false. Elapsed: 8.371486ms Feb 4 13:44:21.109: INFO: Pod "pod-configmaps-aa22ece4-cde3-4b44-9727-2bd3a4f47261": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012377767s Feb 4 13:44:23.113: INFO: Pod "pod-configmaps-aa22ece4-cde3-4b44-9727-2bd3a4f47261": Phase="Running", Reason="", readiness=true. Elapsed: 4.016020292s Feb 4 13:44:25.171: INFO: Pod "pod-configmaps-aa22ece4-cde3-4b44-9727-2bd3a4f47261": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.074541955s STEP: Saw pod success Feb 4 13:44:25.171: INFO: Pod "pod-configmaps-aa22ece4-cde3-4b44-9727-2bd3a4f47261" satisfied condition "Succeeded or Failed" Feb 4 13:44:25.175: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-aa22ece4-cde3-4b44-9727-2bd3a4f47261 container env-test: STEP: delete the pod Feb 4 13:44:25.311: INFO: Waiting for pod pod-configmaps-aa22ece4-cde3-4b44-9727-2bd3a4f47261 to disappear Feb 4 13:44:25.360: INFO: Pod pod-configmaps-aa22ece4-cde3-4b44-9727-2bd3a4f47261 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:44:25.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8155" for this suite. • [SLOW TEST:6.497 seconds] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:36 should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":311,"completed":174,"skipped":2988,"failed":0} S ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:44:25.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating all guestbook components Feb 4 13:44:25.466: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend Feb 4 13:44:25.466: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-5888 create -f -' Feb 4 13:44:29.045: INFO: stderr: "" Feb 4 13:44:29.045: INFO: stdout: "service/agnhost-replica created\n" Feb 4 13:44:29.045: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend Feb 4 13:44:29.045: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-5888 create -f -' Feb 4 13:44:29.422: INFO: stderr: "" Feb 4 13:44:29.422: INFO: stdout: "service/agnhost-primary created\n" Feb 4 13:44:29.422: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to 
automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Feb 4 13:44:29.422: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-5888 create -f -' Feb 4 13:44:29.759: INFO: stderr: "" Feb 4 13:44:29.759: INFO: stdout: "service/frontend created\n" Feb 4 13:44:29.759: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.26 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Feb 4 13:44:29.759: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-5888 create -f -' Feb 4 13:44:30.035: INFO: stderr: "" Feb 4 13:44:30.035: INFO: stdout: "deployment.apps/frontend created\n" Feb 4 13:44:30.035: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.26 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Feb 4 13:44:30.035: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-5888 create -f -' Feb 4 13:44:30.409: INFO: stderr: "" Feb 4 13:44:30.409: INFO: stdout: "deployment.apps/agnhost-primary created\n" Feb 4 13:44:30.410: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.26 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Feb 4 13:44:30.410: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-5888 create -f -' Feb 4 13:44:30.730: INFO: stderr: "" Feb 4 13:44:30.730: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app Feb 4 13:44:30.730: INFO: Waiting for all frontend pods to be Running. Feb 4 13:44:40.780: INFO: Waiting for frontend to serve content. Feb 4 13:44:41.006: INFO: Trying to add a new entry to the guestbook. Feb 4 13:44:41.017: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Feb 4 13:44:41.024: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-5888 delete --grace-period=0 --force -f -' Feb 4 13:44:41.513: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Feb 4 13:44:41.513: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Feb 4 13:44:41.513: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-5888 delete --grace-period=0 --force -f -' Feb 4 13:44:41.755: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 4 13:44:41.755: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Feb 4 13:44:41.756: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-5888 delete --grace-period=0 --force -f -' Feb 4 13:44:42.578: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 4 13:44:42.578: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Feb 4 13:44:42.579: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-5888 delete --grace-period=0 --force -f -' Feb 4 13:44:42.677: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 4 13:44:42.677: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Feb 4 13:44:42.677: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-5888 delete --grace-period=0 --force -f -' Feb 4 13:44:42.870: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 4 13:44:42.870: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Feb 4 13:44:42.870: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-5888 delete --grace-period=0 --force -f -' Feb 4 13:44:43.137: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 4 13:44:43.137: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:44:43.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5888" for this suite. 
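The frontend Service created above deliberately leaves type: LoadBalancer commented out so the test also works on clusters without a cloud load balancer. On a cluster that has one, the externally reachable variant would look like this (a sketch of the uncommented form, not something this run applied):

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  type: LoadBalancer             # provisions an external IP where the cluster supports it
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend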
• [SLOW TEST:18.080 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342
    should create and stop a working application [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":311,"completed":175,"skipped":2989,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 4 13:44:43.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85
[It] deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
Feb 4 13:44:43.726: INFO: Creating deployment "webserver-deployment"
Feb 4 13:44:43.819: INFO: Waiting for observed generation 1
Feb 4 13:44:46.183: INFO: Waiting for all required pods to come up
Feb 4 13:44:46.455: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb 4 13:44:58.898: INFO: Waiting for deployment "webserver-deployment" to complete
Feb 4 13:44:58.903: INFO: Updating deployment "webserver-deployment" with a non-existent image
Feb 4 13:44:58.915: INFO: Updating deployment webserver-deployment
Feb 4 13:44:58.915: INFO: Waiting for observed generation 2
Feb 4 13:45:00.964: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb 4 13:45:00.967: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb 4 13:45:00.969: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Feb 4 13:45:00.974: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb 4 13:45:00.974: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb 4 13:45:00.976: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Feb 4 13:45:00.980: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Feb 4 13:45:00.980: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Feb 4 13:45:00.988: INFO: Updating deployment webserver-deployment
Feb 4 13:45:00.988: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Feb 4 13:45:01.843: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb 4 13:45:02.028: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Feb 4 13:45:06.024: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-6758 bc913a00-f5a3-412e-a0be-cc8e0dcfd86b 2097226 3 2021-02-04 13:44:43 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-02-04 13:44:43 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-02-04 13:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005669008 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2021-02-04 13:45:01 +0000 UTC,LastTransitionTime:2021-02-04 13:45:01 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2021-02-04 13:45:02 +0000 UTC,LastTransitionTime:2021-02-04 13:44:43 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Feb 4 13:45:06.226: INFO: New ReplicaSet "webserver-deployment-795d758f88" of 
Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-6758 b6b704d2-13fd-4a4b-b5ee-c589518ed159 2097214 3 2021-02-04 13:44:58 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment bc913a00-f5a3-412e-a0be-cc8e0dcfd86b 0xc0056693c7 0xc0056693c8}] [] [{kube-controller-manager Update apps/v1 2021-02-04 13:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc913a00-f5a3-412e-a0be-cc8e0dcfd86b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005669448 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 4 13:45:06.227: INFO: All old ReplicaSets of Deployment "webserver-deployment": Feb 4 13:45:06.227: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7 deployment-6758 8c4a5863-216b-426e-81ec-6c6de7f75ae3 2097223 3 2021-02-04 13:44:43 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment bc913a00-f5a3-412e-a0be-cc8e0dcfd86b 0xc0056694a7 0xc0056694a8}] [] [{kube-controller-manager Update apps/v1 2021-02-04 13:44:56 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc913a00-f5a3-412e-a0be-cc8e0dcfd86b\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005669518 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Feb 4 13:45:06.398: INFO: Pod "webserver-deployment-795d758f88-4h4lq" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-4h4lq webserver-deployment-795d758f88- deployment-6758 7276ae26-5729-4a81-b2d5-fc4815366acc 2097232 0 2021-02-04 13:45:01 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 b6b704d2-13fd-4a4b-b5ee-c589518ed159 0xc005669967 0xc005669968}] [] [{kube-controller-manager Update v1 2021-02-04 13:45:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b6b704d2-13fd-4a4b-b5ee-c589518ed159\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:45:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2021-02-04 13:45:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:45:06.399: INFO: Pod "webserver-deployment-795d758f88-92wh5" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-92wh5 webserver-deployment-795d758f88- deployment-6758 f30ac0be-c19c-43a1-b155-2b6913b8fcd7 2097273 0 2021-02-04 13:45:02 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 b6b704d2-13fd-4a4b-b5ee-c589518ed159 0xc005669b17 0xc005669b18}] [] [{kube-controller-manager Update v1 2021-02-04 13:45:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b6b704d2-13fd-4a4b-b5ee-c589518ed159\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:45:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2021-02-04 13:45:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:45:06.399: INFO: Pod "webserver-deployment-795d758f88-d6n9p" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-d6n9p webserver-deployment-795d758f88- deployment-6758 c8bf9d7d-41a3-4f3a-812c-7794d1cb24bd 2097146 0 2021-02-04 13:44:59 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 b6b704d2-13fd-4a4b-b5ee-c589518ed159 0xc005669cc7 0xc005669cc8}] [] [{kube-controller-manager Update v1 2021-02-04 13:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b6b704d2-13fd-4a4b-b5ee-c589518ed159\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:44:59 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-02-04 13:44:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2021-02-04 13:44:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:45:06.399: INFO: Pod "webserver-deployment-795d758f88-jhwkr" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-jhwkr webserver-deployment-795d758f88- deployment-6758 e2429013-1e01-41b4-b0e7-064266a36baa 2097284 0 2021-02-04 13:44:58 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 b6b704d2-13fd-4a4b-b5ee-c589518ed159 0xc005669e87 0xc005669e88}] [] [{kube-controller-manager Update v1 2021-02-04 13:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b6b704d2-13fd-4a4b-b5ee-c589518ed159\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:45:05 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.84\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:59 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.1.84,StartTime:2021-02-04 13:44:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.84,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:45:06.400: INFO: Pod "webserver-deployment-795d758f88-kgrcn" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-kgrcn webserver-deployment-795d758f88- deployment-6758 5e398cb4-e23c-486e-aa92-8797aec2c75e 2097267 0 2021-02-04 13:45:01 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 b6b704d2-13fd-4a4b-b5ee-c589518ed159 0xc002dd6107 0xc002dd6108}] [] [{kube-controller-manager Update v1 2021-02-04 13:45:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b6b704d2-13fd-4a4b-b5ee-c589518ed159\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:45:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2021-02-04 13:45:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:45:06.400: INFO: Pod "webserver-deployment-795d758f88-qcvqc" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-qcvqc webserver-deployment-795d758f88- deployment-6758 83209ea0-bbf6-4c99-8e5f-4186c03ab25c 2097236 0 2021-02-04 13:45:01 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 b6b704d2-13fd-4a4b-b5ee-c589518ed159 0xc002dd6417 0xc002dd6418}] [] [{kube-controller-manager Update v1 2021-02-04 13:45:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b6b704d2-13fd-4a4b-b5ee-c589518ed159\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:45:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2021-02-04 13:45:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:45:06.400: INFO: Pod "webserver-deployment-795d758f88-qsb57" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-qsb57 webserver-deployment-795d758f88- deployment-6758 1e6c9c02-95e2-413e-a831-a4a082bd7baa 2097212 0 2021-02-04 13:45:01 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 b6b704d2-13fd-4a4b-b5ee-c589518ed159 0xc002dd6787 0xc002dd6788}] [] [{kube-controller-manager Update v1 2021-02-04 13:45:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b6b704d2-13fd-4a4b-b5ee-c589518ed159\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:45:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-02-04 13:45:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2021-02-04 13:45:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:45:06.400: INFO: Pod "webserver-deployment-795d758f88-qxtzw" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-qxtzw webserver-deployment-795d758f88- deployment-6758 28731651-222f-479f-899d-dcf17017621b 2097289 0 2021-02-04 13:44:58 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 b6b704d2-13fd-4a4b-b5ee-c589518ed159 0xc002dd6937 0xc002dd6938}] [] [{kube-controller-manager Update v1 2021-02-04 13:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b6b704d2-13fd-4a4b-b5ee-c589518ed159\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:45:05 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.5\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:59 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.2.5,StartTime:2021-02-04 13:44:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:45:06.401: INFO: Pod "webserver-deployment-795d758f88-rpcpn" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-rpcpn webserver-deployment-795d758f88- deployment-6758 5daf0265-9108-4650-85ae-f96942dedc44 2097147 0 2021-02-04 13:44:59 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 b6b704d2-13fd-4a4b-b5ee-c589518ed159 0xc002dd6b27 0xc002dd6b28}] [] [{kube-controller-manager Update v1 2021-02-04 13:44:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b6b704d2-13fd-4a4b-b5ee-c589518ed159\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:44:59 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-02-04 13:44:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2021-02-04 13:44:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:45:06.401: INFO: Pod "webserver-deployment-795d758f88-xjntn" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-xjntn webserver-deployment-795d758f88- deployment-6758 e8255aa1-f7b4-423d-8575-22535eaf5347 2097129 0 2021-02-04 13:44:58 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 b6b704d2-13fd-4a4b-b5ee-c589518ed159 0xc002dd6ce7 0xc002dd6ce8}] [] [{kube-controller-manager Update v1 2021-02-04 13:44:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b6b704d2-13fd-4a4b-b5ee-c589518ed159\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:44:59 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-02-04 13:44:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2021-02-04 13:44:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:45:06.401: INFO: Pod "webserver-deployment-795d758f88-zbzcg" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-zbzcg webserver-deployment-795d758f88- deployment-6758 4dbad059-66ad-452c-9f63-46de4b0b356f 2097258 0 2021-02-04 13:45:01 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 b6b704d2-13fd-4a4b-b5ee-c589518ed159 0xc002dd70b7 0xc002dd70b8}] [] [{kube-controller-manager Update v1 2021-02-04 13:45:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b6b704d2-13fd-4a4b-b5ee-c589518ed159\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:45:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2021-02-04 13:45:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:45:06.401: INFO: Pod "webserver-deployment-795d758f88-zcdhw" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-zcdhw webserver-deployment-795d758f88- deployment-6758 289b6929-c0d5-438f-8b42-a671c92dc440 2097250 0 2021-02-04 13:45:01 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 b6b704d2-13fd-4a4b-b5ee-c589518ed159 0xc002dd7377 0xc002dd7378}] [] [{kube-controller-manager Update v1 2021-02-04 13:45:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b6b704d2-13fd-4a4b-b5ee-c589518ed159\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:45:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2021-02-04 13:45:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:45:06.402: INFO: Pod "webserver-deployment-795d758f88-zfkzk" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-zfkzk webserver-deployment-795d758f88- deployment-6758 ebe920c6-d07f-4813-b769-673a8c70b444 2097260 0 2021-02-04 13:45:01 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 b6b704d2-13fd-4a4b-b5ee-c589518ed159 0xc002dd7667 0xc002dd7668}] [] [{kube-controller-manager Update v1 2021-02-04 13:45:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b6b704d2-13fd-4a4b-b5ee-c589518ed159\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:45:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2021-02-04 13:45:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:45:06.402: INFO: Pod "webserver-deployment-dd94f59b7-2gds8" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-2gds8 webserver-deployment-dd94f59b7- deployment-6758 23ff33b0-4c8c-4ff3-9c22-ff7604b71a58 2097265 0 2021-02-04 13:45:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8c4a5863-216b-426e-81ec-6c6de7f75ae3 0xc002dd78e7 0xc002dd78e8}] [] [{kube-controller-manager Update v1 2021-02-04 13:45:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c4a5863-216b-426e-81ec-6c6de7f75ae3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:45:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2021-02-04 13:45:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:45:06.402: INFO: Pod "webserver-deployment-dd94f59b7-4m7m7" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-4m7m7 webserver-deployment-dd94f59b7- deployment-6758 7b5be8fb-d238-400f-9823-4c8fd3f46d17 2097255 0 2021-02-04 13:45:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8c4a5863-216b-426e-81ec-6c6de7f75ae3 0xc002dd7c97 0xc002dd7c98}] [] [{kube-controller-manager Update v1 2021-02-04 13:45:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c4a5863-216b-426e-81ec-6c6de7f75ae3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:45:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2021-02-04 13:45:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:45:06.402: INFO: Pod "webserver-deployment-dd94f59b7-7hc4b" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-7hc4b webserver-deployment-dd94f59b7- deployment-6758 ac67e9e5-c54f-44e0-a89a-e5a66eb943db 2097244 0 2021-02-04 13:45:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8c4a5863-216b-426e-81ec-6c6de7f75ae3 0xc002dd7ed7 0xc002dd7ed8}] [] [{kube-controller-manager Update v1 2021-02-04 13:45:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c4a5863-216b-426e-81ec-6c6de7f75ae3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:45:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2021-02-04 13:45:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:45:06.403: INFO: Pod "webserver-deployment-dd94f59b7-cjqld" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-cjqld webserver-deployment-dd94f59b7- deployment-6758 df43a912-c5d5-4513-9285-9c36ba591bbf 2097217 0 2021-02-04 13:45:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8c4a5863-216b-426e-81ec-6c6de7f75ae3 0xc005724077 0xc005724078}] [] [{kube-controller-manager Update v1 2021-02-04 13:45:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c4a5863-216b-426e-81ec-6c6de7f75ae3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:45:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:01 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2021-02-04 13:45:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:45:06.403: INFO: Pod "webserver-deployment-dd94f59b7-dnf68" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-dnf68 webserver-deployment-dd94f59b7- deployment-6758 0ddc5091-d4ba-4644-a687-cca6b4655239 2097008 0 2021-02-04 13:44:45 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8c4a5863-216b-426e-81ec-6c6de7f75ae3 0xc005724207 0xc005724208}] [] [{kube-controller-manager Update v1 2021-02-04 13:44:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c4a5863-216b-426e-81ec-6c6de7f75ae3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:44:56 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.82\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:46 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.1.82,StartTime:2021-02-04 13:44:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-04 13:44:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b0676d9277ad5c1ca2494983d796435caef182e6f562729a6084d2946bc00e8b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.82,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:45:06.403: INFO: Pod "webserver-deployment-dd94f59b7-jprdl" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-jprdl webserver-deployment-dd94f59b7- deployment-6758 100945cd-85fc-42f5-82e9-01da921d0b48 2097034 0 2021-02-04 13:44:45 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8c4a5863-216b-426e-81ec-6c6de7f75ae3 0xc0057243b7 0xc0057243b8}] [] [{kube-controller-manager Update v1 2021-02-04 13:44:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c4a5863-216b-426e-81ec-6c6de7f75ae3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:44:57 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.3\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:46 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.2.3,StartTime:2021-02-04 13:44:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-04 13:44:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0403cda2571b21e6f6e6f3fceebf8927b269d81c76762fb2ac9563bd4f8eb6ce,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:45:06.404: INFO: Pod "webserver-deployment-dd94f59b7-jr6sq" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-jr6sq webserver-deployment-dd94f59b7- deployment-6758 705ac1f4-49ac-444f-88fc-97a429aeed24 2097230 0 2021-02-04 13:45:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8c4a5863-216b-426e-81ec-6c6de7f75ae3 0xc005724567 0xc005724568}] [] [{kube-controller-manager Update v1 2021-02-04 13:45:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c4a5863-216b-426e-81ec-6c6de7f75ae3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:45:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2021-02-04 13:45:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:45:06.404: INFO: Pod "webserver-deployment-dd94f59b7-m6j6q" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-m6j6q webserver-deployment-dd94f59b7- deployment-6758 0de7b132-9111-41e0-a693-7bb5d570ba0a 2097024 0 2021-02-04 13:44:44 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8c4a5863-216b-426e-81ec-6c6de7f75ae3 0xc0057246f7 0xc0057246f8}] [] [{kube-controller-manager Update v1 2021-02-04 13:44:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c4a5863-216b-426e-81ec-6c6de7f75ae3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:44:56 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.252\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:45 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.2.252,StartTime:2021-02-04 13:44:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-04 13:44:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://cf58f3f071c9cb912916638cb1d393bbc308c8548533228f4fa35d60d7a230e9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.252,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:45:06.404: INFO: Pod "webserver-deployment-dd94f59b7-mpl4x" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-mpl4x webserver-deployment-dd94f59b7- deployment-6758 ac224862-c761-4056-82dd-26496f3ebe27 2097221 0 2021-02-04 13:45:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8c4a5863-216b-426e-81ec-6c6de7f75ae3 0xc0057248a7 0xc0057248a8}] [] [{kube-controller-manager Update v1 2021-02-04 13:45:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c4a5863-216b-426e-81ec-6c6de7f75ae3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:45:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:01 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2021-02-04 13:45:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:45:06.404: INFO: Pod "webserver-deployment-dd94f59b7-nmxlg" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-nmxlg webserver-deployment-dd94f59b7- deployment-6758 ad86c9f2-eaa0-4990-ac26-33cbf06a96b2 2097276 0 2021-02-04 13:45:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8c4a5863-216b-426e-81ec-6c6de7f75ae3 0xc005724a47 0xc005724a48}] [] [{kube-controller-manager Update v1 2021-02-04 13:45:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c4a5863-216b-426e-81ec-6c6de7f75ae3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:45:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2021-02-04 13:45:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:45:06.405: INFO: Pod "webserver-deployment-dd94f59b7-nvjsz" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-nvjsz webserver-deployment-dd94f59b7- deployment-6758 c6729931-cfb9-4ef8-b651-1ddf7286edff 2097051 0 2021-02-04 13:44:45 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8c4a5863-216b-426e-81ec-6c6de7f75ae3 0xc005724bd7 0xc005724bd8}] [] [{kube-controller-manager Update v1 2021-02-04 13:44:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c4a5863-216b-426e-81ec-6c6de7f75ae3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:44:57 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.2\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:46 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.2.2,StartTime:2021-02-04 13:44:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-04 13:44:56 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b3146fcfc7be96d6a29003b9c104b1fe303541577952ac2e2c287bfaa8a5b7af,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:45:06.405: INFO: Pod "webserver-deployment-dd94f59b7-p4blk" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-p4blk webserver-deployment-dd94f59b7- deployment-6758 4c9026f7-b60e-41fe-beae-f067ea0a399b 2097031 0 2021-02-04 13:44:45 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8c4a5863-216b-426e-81ec-6c6de7f75ae3 0xc005724d87 0xc005724d88}] [] [{kube-controller-manager Update v1 2021-02-04 13:44:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c4a5863-216b-426e-81ec-6c6de7f75ae3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:44:57 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.83\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:46 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.1.83,StartTime:2021-02-04 13:44:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-04 13:44:56 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://379d99e7c9b29fec0069f47910283aa03533193cb0a3f2794dd01f9a7f86e1d3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.83,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:45:06.405: INFO: Pod "webserver-deployment-dd94f59b7-rhj4d" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-rhj4d webserver-deployment-dd94f59b7- deployment-6758 ee1fc109-52d0-45c0-8f87-5aa2ae239613 2097200 0 2021-02-04 13:45:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8c4a5863-216b-426e-81ec-6c6de7f75ae3 0xc005724f67 0xc005724f68}] [] [{kube-controller-manager Update v1 2021-02-04 13:45:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c4a5863-216b-426e-81ec-6c6de7f75ae3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:45:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:01 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2021-02-04 13:45:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:45:06.405: INFO: Pod "webserver-deployment-dd94f59b7-rkxmn" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-rkxmn webserver-deployment-dd94f59b7- deployment-6758 1f31ce06-d8c1-4277-8f8a-072ae5ba9d8c 2097238 0 2021-02-04 13:45:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8c4a5863-216b-426e-81ec-6c6de7f75ae3 0xc0057250f7 0xc0057250f8}] [] [{kube-controller-manager Update v1 2021-02-04 13:45:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c4a5863-216b-426e-81ec-6c6de7f75ae3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:45:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2021-02-04 13:45:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:45:06.406: INFO: Pod "webserver-deployment-dd94f59b7-sg5bc" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-sg5bc webserver-deployment-dd94f59b7- deployment-6758 cff7ab12-123d-4101-97c3-9527d625881f 2097021 0 2021-02-04 13:44:45 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8c4a5863-216b-426e-81ec-6c6de7f75ae3 0xc005725287 0xc005725288}] [] [{kube-controller-manager Update v1 2021-02-04 13:44:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c4a5863-216b-426e-81ec-6c6de7f75ae3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:44:56 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.81\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:46 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:10.244.1.81,StartTime:2021-02-04 13:44:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-04 13:44:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ec3577b824d9f7d57751853d3b4ba5d98c2777eaebb82db9782125bcfc00d7f9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.81,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:45:06.406: INFO: Pod "webserver-deployment-dd94f59b7-x9cf6" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-x9cf6 webserver-deployment-dd94f59b7- deployment-6758 6856bfe9-282b-46bf-ad87-5718de4c0bd3 2097227 0 2021-02-04 13:45:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8c4a5863-216b-426e-81ec-6c6de7f75ae3 0xc005725437 0xc005725438}] [] [{kube-controller-manager Update v1 2021-02-04 13:45:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c4a5863-216b-426e-81ec-6c6de7f75ae3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:45:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2021-02-04 13:45:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:45:06.406: INFO: Pod "webserver-deployment-dd94f59b7-xd24l" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-xd24l webserver-deployment-dd94f59b7- deployment-6758 3f48fdda-a7b7-48fe-8220-28ce0dd580bc 2097044 0 2021-02-04 13:44:44 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8c4a5863-216b-426e-81ec-6c6de7f75ae3 0xc0057255c7 0xc0057255c8}] [] [{kube-controller-manager Update v1 2021-02-04 13:44:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c4a5863-216b-426e-81ec-6c6de7f75ae3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:44:57 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.249\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:45 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.2.249,StartTime:2021-02-04 13:44:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-04 13:44:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://46428526f5de595d5b998780a60de9ac959674ade1c34ae7b74fb9f67614992f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.249,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:45:06.406: INFO: Pod "webserver-deployment-dd94f59b7-xwgbf" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-xwgbf webserver-deployment-dd94f59b7- deployment-6758 fd1d1838-e745-42fe-aa6c-b8f6eab97bbf 2097257 0 2021-02-04 13:45:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8c4a5863-216b-426e-81ec-6c6de7f75ae3 0xc005725777 0xc005725778}] [] [{kube-controller-manager Update v1 2021-02-04 13:45:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c4a5863-216b-426e-81ec-6c6de7f75ae3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:45:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2021-02-04 13:45:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:45:06.406: INFO: Pod "webserver-deployment-dd94f59b7-xwjrl" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-xwjrl webserver-deployment-dd94f59b7- deployment-6758 3a8417f7-891f-4b3d-999f-dc918cf9a3d9 2097012 0 2021-02-04 13:44:44 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8c4a5863-216b-426e-81ec-6c6de7f75ae3 0xc005725907 0xc005725908}] [] [{kube-controller-manager Update v1 2021-02-04 13:44:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c4a5863-216b-426e-81ec-6c6de7f75ae3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:44:56 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.250\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:45 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:44:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.2.250,StartTime:2021-02-04 13:44:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-04 13:44:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://179f9e0921d327fce637ca7dd89c27e70ad6b832e154e42240083b41325bd826,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.250,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 4 13:45:06.407: INFO: Pod "webserver-deployment-dd94f59b7-z6dq6" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-z6dq6 webserver-deployment-dd94f59b7- deployment-6758 8a9798d8-aeea-43fa-afb6-d6629d098aa7 2097248 0 2021-02-04 13:45:01 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 8c4a5863-216b-426e-81ec-6c6de7f75ae3 0xc005725ad7 0xc005725ad8}] [] [{kube-controller-manager Update v1 2021-02-04 13:45:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c4a5863-216b-426e-81ec-6c6de7f75ae3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 13:45:02 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8rjt8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8rjt8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8rjt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 13:45:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.16,PodIP:,StartTime:2021-02-04 13:45:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:45:06.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6758" for this suite. • [SLOW TEST:23.138 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":311,"completed":176,"skipped":3006,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:45:06.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating Agnhost RC Feb 4 13:45:07.216: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-4773 create -f -' Feb 4 13:45:08.813: INFO: stderr: "" Feb 4 13:45:08.813: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. 
Feb 4 13:45:09.920: INFO: Selector matched 1 pods for map[app:agnhost] Feb 4 13:45:09.920: INFO: Found 0 / 1 Feb 4 13:45:10.817: INFO: Selector matched 1 pods for map[app:agnhost] Feb 4 13:45:10.817: INFO: Found 0 / 1 Feb 4 13:45:12.151: INFO: Selector matched 1 pods for map[app:agnhost] Feb 4 13:45:12.152: INFO: Found 0 / 1 Feb 4 13:45:12.849: INFO: Selector matched 1 pods for map[app:agnhost] Feb 4 13:45:12.849: INFO: Found 0 / 1 Feb 4 13:45:13.960: INFO: Selector matched 1 pods for map[app:agnhost] Feb 4 13:45:13.960: INFO: Found 0 / 1 Feb 4 13:45:15.345: INFO: Selector matched 1 pods for map[app:agnhost] Feb 4 13:45:15.345: INFO: Found 0 / 1 Feb 4 13:45:16.762: INFO: Selector matched 1 pods for map[app:agnhost] Feb 4 13:45:16.762: INFO: Found 0 / 1 Feb 4 13:45:17.505: INFO: Selector matched 1 pods for map[app:agnhost] Feb 4 13:45:17.505: INFO: Found 0 / 1 Feb 4 13:45:18.634: INFO: Selector matched 1 pods for map[app:agnhost] Feb 4 13:45:18.634: INFO: Found 0 / 1 Feb 4 13:45:19.236: INFO: Selector matched 1 pods for map[app:agnhost] Feb 4 13:45:19.236: INFO: Found 0 / 1 Feb 4 13:45:20.173: INFO: Selector matched 1 pods for map[app:agnhost] Feb 4 13:45:20.174: INFO: Found 0 / 1 Feb 4 13:45:21.019: INFO: Selector matched 1 pods for map[app:agnhost] Feb 4 13:45:21.019: INFO: Found 0 / 1 Feb 4 13:45:22.317: INFO: Selector matched 1 pods for map[app:agnhost] Feb 4 13:45:22.317: INFO: Found 0 / 1 Feb 4 13:45:22.927: INFO: Selector matched 1 pods for map[app:agnhost] Feb 4 13:45:22.927: INFO: Found 0 / 1 Feb 4 13:45:23.987: INFO: Selector matched 1 pods for map[app:agnhost] Feb 4 13:45:23.987: INFO: Found 0 / 1 Feb 4 13:45:25.245: INFO: Selector matched 1 pods for map[app:agnhost] Feb 4 13:45:25.245: INFO: Found 1 / 1 Feb 4 13:45:25.245: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Feb 4 13:45:25.431: INFO: Selector matched 1 pods for map[app:agnhost] Feb 4 13:45:25.431: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Feb 4 13:45:25.431: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-4773 patch pod agnhost-primary-fc4qv -p {"metadata":{"annotations":{"x":"y"}}}' Feb 4 13:45:25.827: INFO: stderr: "" Feb 4 13:45:25.827: INFO: stdout: "pod/agnhost-primary-fc4qv patched\n" STEP: checking annotations Feb 4 13:45:25.878: INFO: Selector matched 1 pods for map[app:agnhost] Feb 4 13:45:25.878: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:45:25.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4773" for this suite. 
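The patch the test issues above is an ordinary strategic-merge patch and can be reproduced against any pod outside the e2e framework. A minimal sketch, assuming a reachable cluster; the pod name agnhost-primary-fc4qv and namespace kubectl-4773 are the generated values from this particular run and will differ elsewhere:

  # Add the annotation exactly as the test does
  kubectl --namespace=kubectl-4773 patch pod agnhost-primary-fc4qv \
      -p '{"metadata":{"annotations":{"x":"y"}}}'

  # Confirm the annotation was applied
  kubectl --namespace=kubectl-4773 get pod agnhost-primary-fc4qv \
      -o jsonpath='{.metadata.annotations.x}'

The test then re-selects all pods matching app=agnhost and asserts each carries the annotation, which is what the "checking annotations" step records.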
• [SLOW TEST:19.310 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1466 should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":311,"completed":177,"skipped":3009,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:45:25.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 13:45:26.473: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Feb 4 13:45:30.029: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7285 --namespace=crd-publish-openapi-7285 create -f -' Feb 4 13:45:37.746: INFO: stderr: "" Feb 4 13:45:37.746: INFO: stdout: "e2e-test-crd-publish-openapi-1611-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Feb 4 13:45:37.746: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7285 --namespace=crd-publish-openapi-7285 delete e2e-test-crd-publish-openapi-1611-crds test-cr' Feb 4 13:45:37.849: INFO: stderr: "" Feb 4 13:45:37.849: INFO: stdout: "e2e-test-crd-publish-openapi-1611-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Feb 4 13:45:37.849: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7285 --namespace=crd-publish-openapi-7285 apply -f -' Feb 4 13:45:38.120: INFO: stderr: "" Feb 4 13:45:38.120: INFO: stdout: "e2e-test-crd-publish-openapi-1611-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Feb 4 13:45:38.120: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7285 --namespace=crd-publish-openapi-7285 delete e2e-test-crd-publish-openapi-1611-crds test-cr' Feb 4 13:45:38.233: INFO: stderr: "" Feb 4 13:45:38.233: INFO: stdout: "e2e-test-crd-publish-openapi-1611-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Feb 4 13:45:38.233: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 
--kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7285 explain e2e-test-crd-publish-openapi-1611-crds' Feb 4 13:45:38.541: INFO: stderr: "" Feb 4 13:45:38.541: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1611-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:45:42.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7285" for this suite. • [SLOW TEST:16.220 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":311,"completed":178,"skipped":3022,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:45:42.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating the pod Feb 4 13:45:42.242: INFO: PodSpec: initContainers in spec.initContainers Feb 4 13:46:40.730: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-7a095ee2-5dc1-432c-b39e-ba62edfbae19", GenerateName:"", Namespace:"init-container-3340", SelfLink:"", UID:"afb69bb7-c964-4145-9267-35f36294d811", ResourceVersion:"2097916", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63748043142, loc:(*time.Location)(0x7886c60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"242833762"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002c6b960), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002c6b980)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002c6b9a0), FieldsType:"FieldsV1", 
FieldsV1:(*v1.FieldsV1)(0xc002c6b9c0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-ljq56", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc006860f00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ljq56", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ljq56", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ljq56", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc004536a98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002b04150), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004536b30)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004536b50)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc004536b58), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc004536b5c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0045e5bf0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748043142, loc:(*time.Location)(0x7886c60)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748043142, loc:(*time.Location)(0x7886c60)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748043142, loc:(*time.Location)(0x7886c60)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748043142, loc:(*time.Location)(0x7886c60)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.14", PodIP:"10.244.2.20", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.20"}}, StartTime:(*v1.Time)(0xc002c6b9e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002c6ba20), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002b04230)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://a64515e7148d0760efd830cb83944fe8c7f65ac03a445757d995349dd424a8ce", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c6ba40), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c6ba00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc004536bdf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:46:40.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3340" for this suite. 
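The PodSpec dumped above reduces to a small manifest: two init containers (init1 runs /bin/false and fails on every attempt; init2 runs /bin/true but can never start) ahead of a pause app container, under restartPolicy: Always. A minimal reconstruction under those assumptions, with an illustrative pod name in place of the generated one:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-init-demo            # illustrative; the test generates a UUID-suffixed name
  spec:
    restartPolicy: Always
    initContainers:
    - name: init1
      image: docker.io/library/busybox:1.29
      command: ["/bin/false"]      # fails every attempt, blocking init2 and run1
    - name: init2
      image: docker.io/library/busybox:1.29
      command: ["/bin/true"]
    containers:
    - name: run1
      image: k8s.gcr.io/pause:3.2
      resources:                   # cpu-only requests/limits match the Burstable QOSClass in the dump
        requests: { cpu: 100m }
        limits:   { cpu: 100m }
  EOF

  # init1's RestartCount climbs while the pod stays Pending
  kubectl get pod pod-init-demo -w

The dump itself confirms the expected outcome: init1 shows RestartCount:3, while init2 and run1 have empty ContainerIDs and never started.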
• [SLOW TEST:58.971 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":311,"completed":179,"skipped":3036,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:46:41.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test emptydir 0644 on tmpfs Feb 4 13:46:41.452: INFO: Waiting up to 5m0s for pod "pod-974633e9-8f1e-4684-9e77-a64417db7db0" in namespace "emptydir-959" to be "Succeeded or Failed" Feb 4 13:46:41.599: INFO: Pod "pod-974633e9-8f1e-4684-9e77-a64417db7db0": Phase="Pending", Reason="", readiness=false. Elapsed: 146.777539ms Feb 4 13:46:43.602: INFO: Pod "pod-974633e9-8f1e-4684-9e77-a64417db7db0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15045039s Feb 4 13:46:45.643: INFO: Pod "pod-974633e9-8f1e-4684-9e77-a64417db7db0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.191395265s Feb 4 13:46:48.029: INFO: Pod "pod-974633e9-8f1e-4684-9e77-a64417db7db0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.577262763s Feb 4 13:46:50.034: INFO: Pod "pod-974633e9-8f1e-4684-9e77-a64417db7db0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.58216788s STEP: Saw pod success Feb 4 13:46:50.034: INFO: Pod "pod-974633e9-8f1e-4684-9e77-a64417db7db0" satisfied condition "Succeeded or Failed" Feb 4 13:46:50.037: INFO: Trying to get logs from node latest-worker2 pod pod-974633e9-8f1e-4684-9e77-a64417db7db0 container test-container: STEP: delete the pod Feb 4 13:46:50.101: INFO: Waiting for pod pod-974633e9-8f1e-4684-9e77-a64417db7db0 to disappear Feb 4 13:46:50.256: INFO: Pod pod-974633e9-8f1e-4684-9e77-a64417db7db0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:46:50.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-959" for this suite. 
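The (non-root,0644,tmpfs) variant exercises a memory-backed emptyDir: a file is written into the volume with mode 0644 as a non-root user, then the mode and the tmpfs mount are verified. A rough busybox equivalent of what the pod does; the conformance test itself uses its own mount-test helper image, and the UID and names below are illustrative:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0644-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001              # any non-root UID illustrates the non-root case
    containers:
    - name: test-container
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c",
        "echo hi > /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a' /test-volume/f && grep ' /test-volume ' /proc/mounts"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir:
        medium: Memory             # tmpfs-backed emptyDir
  EOF

  # Expect "644" and a tmpfs entry once the pod completes
  kubectl logs emptydir-0644-demo

Writing as a non-root UID works here because emptyDir volumes are world-writable by default; the point of the variant is that the 0644 mode set on the file survives on the tmpfs medium.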
• [SLOW TEST:9.170 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":180,"skipped":3061,"failed":0} [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:46:50.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:46:50.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7685" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":311,"completed":181,"skipped":3061,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Ingress API should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:46:50.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Feb 4 13:46:50.740: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Feb 4 13:46:50.743: INFO: starting watch STEP: patching STEP: updating Feb 4 13:46:50.774: INFO: waiting for watch events with expected annotations Feb 4 13:46:50.774: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:46:51.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-4465" for this suite. •{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":311,"completed":182,"skipped":3110,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:46:51.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 13:46:55.387: INFO: Deleting pod "var-expansion-847a0787-8849-40b0-b873-42a3589d4714" in namespace "var-expansion-5563" Feb 4 13:46:55.417: INFO: Wait up to 5m0s for pod "var-expansion-847a0787-8849-40b0-b873-42a3589d4714" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:47:31.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5563" for this suite. 
• [SLOW TEST:40.231 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":311,"completed":183,"skipped":3132,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:47:31.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1659.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1659.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1659.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1659.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1659.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1659.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1659.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1659.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1659.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1659.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1659.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 234.224.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.224.234_udp@PTR;check="$$(dig +tcp +noall +answer +search 234.224.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.224.234_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1659.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1659.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1659.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1659.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1659.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1659.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1659.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1659.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1659.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1659.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1659.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 234.224.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.224.234_udp@PTR;check="$$(dig +tcp +noall +answer +search 234.224.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.224.234_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 4 13:47:39.788: INFO: Unable to read wheezy_udp@dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:39.791: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:39.793: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:39.796: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:39.816: INFO: Unable to read jessie_udp@dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:39.818: INFO: Unable to read jessie_tcp@dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:39.821: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:39.824: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:39.851: INFO: Lookups using dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146 failed for: [wheezy_udp@dns-test-service.dns-1659.svc.cluster.local wheezy_tcp@dns-test-service.dns-1659.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local jessie_udp@dns-test-service.dns-1659.svc.cluster.local jessie_tcp@dns-test-service.dns-1659.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local] Feb 4 13:47:44.856: INFO: Unable to read wheezy_udp@dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:44.860: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods 
dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:44.863: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:44.866: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:44.899: INFO: Unable to read jessie_udp@dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:44.902: INFO: Unable to read jessie_tcp@dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:44.906: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:44.909: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:44.932: INFO: Lookups using dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146 failed for: [wheezy_udp@dns-test-service.dns-1659.svc.cluster.local wheezy_tcp@dns-test-service.dns-1659.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local jessie_udp@dns-test-service.dns-1659.svc.cluster.local jessie_tcp@dns-test-service.dns-1659.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local] Feb 4 13:47:49.975: INFO: Unable to read wheezy_udp@dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:50.534: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:51.377: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:51.596: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:53.084: INFO: Unable to read jessie_udp@dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could 
not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:53.087: INFO: Unable to read jessie_tcp@dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:53.090: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:53.092: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:53.108: INFO: Lookups using dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146 failed for: [wheezy_udp@dns-test-service.dns-1659.svc.cluster.local wheezy_tcp@dns-test-service.dns-1659.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local jessie_udp@dns-test-service.dns-1659.svc.cluster.local jessie_tcp@dns-test-service.dns-1659.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local] Feb 4 13:47:54.855: INFO: Unable to read wheezy_udp@dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:54.859: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:54.862: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:54.864: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:55.118: INFO: Unable to read jessie_udp@dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:55.122: INFO: Unable to read jessie_tcp@dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:55.126: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:55.130: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local from pod 
dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:55.148: INFO: Lookups using dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146 failed for: [wheezy_udp@dns-test-service.dns-1659.svc.cluster.local wheezy_tcp@dns-test-service.dns-1659.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local jessie_udp@dns-test-service.dns-1659.svc.cluster.local jessie_tcp@dns-test-service.dns-1659.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local] Feb 4 13:47:59.862: INFO: Unable to read wheezy_udp@dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:59.865: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:59.868: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:59.871: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:59.891: INFO: Unable to read jessie_udp@dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:59.893: INFO: Unable to read jessie_tcp@dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:59.896: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:59.899: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:47:59.917: INFO: Lookups using dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146 failed for: [wheezy_udp@dns-test-service.dns-1659.svc.cluster.local wheezy_tcp@dns-test-service.dns-1659.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local jessie_udp@dns-test-service.dns-1659.svc.cluster.local jessie_tcp@dns-test-service.dns-1659.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local] Feb 4 13:48:05.246: INFO: 
Unable to read wheezy_udp@dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:48:05.250: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:48:05.270: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:48:05.294: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:48:05.515: INFO: Unable to read jessie_udp@dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:48:05.517: INFO: Unable to read jessie_tcp@dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:48:05.520: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:48:05.522: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local from pod dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146: the server could not find the requested resource (get pods dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146) Feb 4 13:48:05.538: INFO: Lookups using dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146 failed for: [wheezy_udp@dns-test-service.dns-1659.svc.cluster.local wheezy_tcp@dns-test-service.dns-1659.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local jessie_udp@dns-test-service.dns-1659.svc.cluster.local jessie_tcp@dns-test-service.dns-1659.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1659.svc.cluster.local] Feb 4 13:48:10.345: INFO: DNS probes using dns-1659/dns-test-96d94b04-694d-41a4-b875-80ef5d9a1146 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:48:13.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1659" for this suite. 
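Two notes on the run above. First, the repeated "Unable to read ..." lines are the prober retrying until service DNS propagates; the lookups succeed at 13:48:10, which is why the spec still passes. Second, the wheezy/jessie probe loops reduce to a handful of lookups; run from any pod whose image ships dig (the suite uses its dnsutils images), they are:

dig +notcp +noall +answer +search dns-test-service.dns-1659.svc.cluster.local A     # service A record, UDP
dig +tcp   +noall +answer +search dns-test-service.dns-1659.svc.cluster.local A     # same, over TCP
dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1659.svc.cluster.local SRV   # SRV for the named port, UDP
dig +tcp   +noall +answer +search _http._tcp.dns-test-service.dns-1659.svc.cluster.local SRV   # same, over TCP
dig +notcp +noall +answer +search 234.224.96.10.in-addr.arpa. PTR   # reverse record for the ClusterIP 10.96.224.234

Each probe writes OK into /results only when the answer section is non-empty, which is what the "Lookups ... failed for" summaries are tracking.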
• [SLOW TEST:43.149 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":311,"completed":184,"skipped":3154,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] IngressClass API should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:48:14.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 [It] should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Feb 4 13:48:17.049: INFO: starting watch STEP: patching STEP: updating Feb 4 13:48:17.407: INFO: waiting for watch events with expected annotations Feb 4 13:48:17.407: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:48:18.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-2941" for this suite. 
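The IngressClass API operations stepped through above map onto plain kubectl calls against networking.k8s.io/v1. A sketch follows; the class name and controller string are illustrative, and the spec's watch and deletecollection steps use the REST API directly rather than a single kubectl verb.

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: demo-class                              # illustrative
spec:
  controller: example.com/ingress-controller    # illustrative
EOF
kubectl get ingressclass demo-class             # getting
kubectl get ingressclasses                      # listing (cluster-scoped)
kubectl patch ingressclass demo-class --type=merge \
  -p '{"metadata":{"annotations":{"patched":"true"}}}'              # patching
kubectl annotate ingressclass demo-class updated=true --overwrite  # updating
kubectl delete ingressclass demo-class          # deleting
# "deleting a collection" drives the deletecollection verb on the same
# endpoint, typically scoped by a label selector.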
•{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":311,"completed":185,"skipped":3218,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:48:18.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:740 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating service in namespace services-2445 STEP: creating service affinity-clusterip in namespace services-2445 STEP: creating replication controller affinity-clusterip in namespace services-2445 I0204 13:48:20.494398 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-2445, replica count: 3 I0204 13:48:23.544799 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 13:48:26.545031 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 13:48:29.545360 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 13:48:32.545621 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 13:48:35.545868 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 4 13:48:35.572: INFO: Creating new exec pod Feb 4 13:48:45.074: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-2445 exec execpod-affinity7cxjh -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Feb 4 13:48:45.462: INFO: stderr: "I0204 13:48:45.398777 3053 log.go:181] (0xc000212dc0) (0xc00094e3c0) Create stream\nI0204 13:48:45.398849 3053 log.go:181] (0xc000212dc0) (0xc00094e3c0) Stream added, broadcasting: 1\nI0204 13:48:45.400361 3053 log.go:181] (0xc000212dc0) Reply frame received for 1\nI0204 13:48:45.400403 3053 log.go:181] (0xc000212dc0) (0xc0006ea000) Create stream\nI0204 13:48:45.400418 3053 log.go:181] (0xc000212dc0) (0xc0006ea000) Stream added, broadcasting: 3\nI0204 13:48:45.401350 3053 log.go:181] (0xc000212dc0) Reply frame received for 3\nI0204 13:48:45.401377 3053 log.go:181] (0xc000212dc0) (0xc00082a000) Create stream\nI0204 13:48:45.401385 3053 log.go:181] (0xc000212dc0) (0xc00082a000) Stream added, broadcasting: 5\nI0204 13:48:45.402135 3053 log.go:181] (0xc000212dc0) Reply frame received for 5\nI0204 13:48:45.452525 3053 log.go:181] 
(0xc000212dc0) Data frame received for 5\nI0204 13:48:45.452558 3053 log.go:181] (0xc000212dc0) Data frame received for 3\nI0204 13:48:45.452581 3053 log.go:181] (0xc0006ea000) (3) Data frame handling\nI0204 13:48:45.452628 3053 log.go:181] (0xc00082a000) (5) Data frame handling\nI0204 13:48:45.452643 3053 log.go:181] (0xc00082a000) (5) Data frame sent\nI0204 13:48:45.452651 3053 log.go:181] (0xc000212dc0) Data frame received for 5\nI0204 13:48:45.452670 3053 log.go:181] (0xc00082a000) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0204 13:48:45.455801 3053 log.go:181] (0xc000212dc0) Data frame received for 1\nI0204 13:48:45.455818 3053 log.go:181] (0xc00094e3c0) (1) Data frame handling\nI0204 13:48:45.455828 3053 log.go:181] (0xc00094e3c0) (1) Data frame sent\nI0204 13:48:45.455949 3053 log.go:181] (0xc000212dc0) (0xc00094e3c0) Stream removed, broadcasting: 1\nI0204 13:48:45.456012 3053 log.go:181] (0xc000212dc0) Go away received\nI0204 13:48:45.456309 3053 log.go:181] (0xc000212dc0) (0xc00094e3c0) Stream removed, broadcasting: 1\nI0204 13:48:45.456328 3053 log.go:181] (0xc000212dc0) (0xc0006ea000) Stream removed, broadcasting: 3\nI0204 13:48:45.456338 3053 log.go:181] (0xc000212dc0) (0xc00082a000) Stream removed, broadcasting: 5\n" Feb 4 13:48:45.462: INFO: stdout: "" Feb 4 13:48:45.463: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-2445 exec execpod-affinity7cxjh -- /bin/sh -x -c nc -zv -t -w 2 10.96.105.42 80' Feb 4 13:48:45.680: INFO: stderr: "I0204 13:48:45.621668 3071 log.go:181] (0xc000140370) (0xc00054e000) Create stream\nI0204 13:48:45.621732 3071 log.go:181] (0xc000140370) (0xc00054e000) Stream added, broadcasting: 1\nI0204 13:48:45.623024 3071 log.go:181] (0xc000140370) Reply frame received for 1\nI0204 13:48:45.623053 3071 log.go:181] (0xc000140370) (0xc0004d2780) Create stream\nI0204 13:48:45.623063 3071 log.go:181] (0xc000140370) (0xc0004d2780) Stream added, broadcasting: 3\nI0204 13:48:45.623747 3071 log.go:181] (0xc000140370) Reply frame received for 3\nI0204 13:48:45.623776 3071 log.go:181] (0xc000140370) (0xc00054e0a0) Create stream\nI0204 13:48:45.623789 3071 log.go:181] (0xc000140370) (0xc00054e0a0) Stream added, broadcasting: 5\nI0204 13:48:45.624357 3071 log.go:181] (0xc000140370) Reply frame received for 5\nI0204 13:48:45.672530 3071 log.go:181] (0xc000140370) Data frame received for 5\nI0204 13:48:45.672553 3071 log.go:181] (0xc00054e0a0) (5) Data frame handling\nI0204 13:48:45.672569 3071 log.go:181] (0xc00054e0a0) (5) Data frame sent\nI0204 13:48:45.672577 3071 log.go:181] (0xc000140370) Data frame received for 5\nI0204 13:48:45.672583 3071 log.go:181] (0xc00054e0a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.105.42 80\nConnection to 10.96.105.42 80 port [tcp/http] succeeded!\nI0204 13:48:45.672697 3071 log.go:181] (0xc000140370) Data frame received for 3\nI0204 13:48:45.672717 3071 log.go:181] (0xc0004d2780) (3) Data frame handling\nI0204 13:48:45.674713 3071 log.go:181] (0xc000140370) Data frame received for 1\nI0204 13:48:45.674727 3071 log.go:181] (0xc00054e000) (1) Data frame handling\nI0204 13:48:45.674735 3071 log.go:181] (0xc00054e000) (1) Data frame sent\nI0204 13:48:45.674743 3071 log.go:181] (0xc000140370) (0xc00054e000) Stream removed, broadcasting: 1\nI0204 13:48:45.674809 3071 log.go:181] (0xc000140370) Go away received\nI0204 13:48:45.675015 3071 log.go:181] (0xc000140370) 
(0xc00054e000) Stream removed, broadcasting: 1\nI0204 13:48:45.675029 3071 log.go:181] (0xc000140370) (0xc0004d2780) Stream removed, broadcasting: 3\nI0204 13:48:45.675036 3071 log.go:181] (0xc000140370) (0xc00054e0a0) Stream removed, broadcasting: 5\n" Feb 4 13:48:45.680: INFO: stdout: "" Feb 4 13:48:45.680: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-2445 exec execpod-affinity7cxjh -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.105.42:80/ ; done' Feb 4 13:48:45.948: INFO: stderr: "I0204 13:48:45.813830 3086 log.go:181] (0xc0001ea370) (0xc000568000) Create stream\nI0204 13:48:45.813902 3086 log.go:181] (0xc0001ea370) (0xc000568000) Stream added, broadcasting: 1\nI0204 13:48:45.815261 3086 log.go:181] (0xc0001ea370) Reply frame received for 1\nI0204 13:48:45.815287 3086 log.go:181] (0xc0001ea370) (0xc000d08460) Create stream\nI0204 13:48:45.815295 3086 log.go:181] (0xc0001ea370) (0xc000d08460) Stream added, broadcasting: 3\nI0204 13:48:45.815953 3086 log.go:181] (0xc0001ea370) Reply frame received for 3\nI0204 13:48:45.815978 3086 log.go:181] (0xc0001ea370) (0xc0009ca1e0) Create stream\nI0204 13:48:45.815999 3086 log.go:181] (0xc0001ea370) (0xc0009ca1e0) Stream added, broadcasting: 5\nI0204 13:48:45.816665 3086 log.go:181] (0xc0001ea370) Reply frame received for 5\nI0204 13:48:45.873146 3086 log.go:181] (0xc0001ea370) Data frame received for 5\nI0204 13:48:45.873182 3086 log.go:181] (0xc0009ca1e0) (5) Data frame handling\nI0204 13:48:45.873197 3086 log.go:181] (0xc0009ca1e0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.105.42:80/\nI0204 13:48:45.873222 3086 log.go:181] (0xc0001ea370) Data frame received for 3\nI0204 13:48:45.873233 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.873243 3086 log.go:181] (0xc000d08460) (3) Data frame sent\nI0204 13:48:45.876076 3086 log.go:181] (0xc0001ea370) Data frame received for 3\nI0204 13:48:45.876098 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.876114 3086 log.go:181] (0xc000d08460) (3) Data frame sent\nI0204 13:48:45.876787 3086 log.go:181] (0xc0001ea370) Data frame received for 3\nI0204 13:48:45.876807 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.876819 3086 log.go:181] (0xc000d08460) (3) Data frame sent\nI0204 13:48:45.876916 3086 log.go:181] (0xc0001ea370) Data frame received for 5\nI0204 13:48:45.876932 3086 log.go:181] (0xc0009ca1e0) (5) Data frame handling\nI0204 13:48:45.876944 3086 log.go:181] (0xc0009ca1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.105.42:80/I0204 13:48:45.877119 3086 log.go:181] (0xc0001ea370) Data frame received for 5\nI0204 13:48:45.877135 3086 log.go:181] (0xc0009ca1e0) (5) Data frame handling\nI0204 13:48:45.877145 3086 log.go:181] (0xc0009ca1e0) (5) Data frame sent\n\nI0204 13:48:45.880333 3086 log.go:181] (0xc0001ea370) Data frame received for 3\nI0204 13:48:45.880350 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.880366 3086 log.go:181] (0xc000d08460) (3) Data frame sent\nI0204 13:48:45.881115 3086 log.go:181] (0xc0001ea370) Data frame received for 5\nI0204 13:48:45.881130 3086 log.go:181] (0xc0009ca1e0) (5) Data frame handling\nI0204 13:48:45.881145 3086 log.go:181] (0xc0009ca1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.105.42:80/\nI0204 13:48:45.881221 3086 log.go:181] 
(0xc0001ea370) Data frame received for 3\nI0204 13:48:45.881242 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.881259 3086 log.go:181] (0xc000d08460) (3) Data frame sent\nI0204 13:48:45.884071 3086 log.go:181] (0xc0001ea370) Data frame received for 3\nI0204 13:48:45.884086 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.884104 3086 log.go:181] (0xc000d08460) (3) Data frame sent\nI0204 13:48:45.884479 3086 log.go:181] (0xc0001ea370) Data frame received for 5\nI0204 13:48:45.884496 3086 log.go:181] (0xc0009ca1e0) (5) Data frame handling\nI0204 13:48:45.884510 3086 log.go:181] (0xc0009ca1e0) (5) Data frame sent\nI0204 13:48:45.884520 3086 log.go:181] (0xc0001ea370) Data frame received for 3\nI0204 13:48:45.884529 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.884539 3086 log.go:181] (0xc000d08460) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.105.42:80/\nI0204 13:48:45.888939 3086 log.go:181] (0xc0001ea370) Data frame received for 3\nI0204 13:48:45.888951 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.888958 3086 log.go:181] (0xc000d08460) (3) Data frame sent\nI0204 13:48:45.889505 3086 log.go:181] (0xc0001ea370) Data frame received for 3\nI0204 13:48:45.889533 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.889544 3086 log.go:181] (0xc000d08460) (3) Data frame sent\nI0204 13:48:45.889574 3086 log.go:181] (0xc0001ea370) Data frame received for 5\nI0204 13:48:45.889585 3086 log.go:181] (0xc0009ca1e0) (5) Data frame handling\nI0204 13:48:45.889603 3086 log.go:181] (0xc0009ca1e0) (5) Data frame sent\nI0204 13:48:45.889616 3086 log.go:181] (0xc0001ea370) Data frame received for 5\nI0204 13:48:45.889622 3086 log.go:181] (0xc0009ca1e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.105.42:80/\nI0204 13:48:45.889639 3086 log.go:181] (0xc0009ca1e0) (5) Data frame sent\nI0204 13:48:45.895112 3086 log.go:181] (0xc0001ea370) Data frame received for 3\nI0204 13:48:45.895126 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.895138 3086 log.go:181] (0xc000d08460) (3) Data frame sent\nI0204 13:48:45.895844 3086 log.go:181] (0xc0001ea370) Data frame received for 3\nI0204 13:48:45.895861 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.895877 3086 log.go:181] (0xc000d08460) (3) Data frame sent\nI0204 13:48:45.895897 3086 log.go:181] (0xc0001ea370) Data frame received for 5\nI0204 13:48:45.895920 3086 log.go:181] (0xc0009ca1e0) (5) Data frame handling\nI0204 13:48:45.895940 3086 log.go:181] (0xc0009ca1e0) (5) Data frame sent\nI0204 13:48:45.895955 3086 log.go:181] (0xc0001ea370) Data frame received for 5\nI0204 13:48:45.895968 3086 log.go:181] (0xc0009ca1e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.105.42:80/\nI0204 13:48:45.895994 3086 log.go:181] (0xc0009ca1e0) (5) Data frame sent\nI0204 13:48:45.898831 3086 log.go:181] (0xc0001ea370) Data frame received for 3\nI0204 13:48:45.898866 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.898893 3086 log.go:181] (0xc000d08460) (3) Data frame sent\nI0204 13:48:45.899568 3086 log.go:181] (0xc0001ea370) Data frame received for 3\nI0204 13:48:45.899596 3086 log.go:181] (0xc0001ea370) Data frame received for 5\nI0204 13:48:45.899611 3086 log.go:181] (0xc0009ca1e0) (5) Data frame handling\nI0204 13:48:45.899620 3086 log.go:181] (0xc0009ca1e0) (5) Data frame sent\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://10.96.105.42:80/\nI0204 13:48:45.899632 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.899640 3086 log.go:181] (0xc000d08460) (3) Data frame sent\nI0204 13:48:45.904043 3086 log.go:181] (0xc0001ea370) Data frame received for 3\nI0204 13:48:45.904071 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.904104 3086 log.go:181] (0xc000d08460) (3) Data frame sent\nI0204 13:48:45.904577 3086 log.go:181] (0xc0001ea370) Data frame received for 3\nI0204 13:48:45.904611 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.904626 3086 log.go:181] (0xc000d08460) (3) Data frame sent\nI0204 13:48:45.904641 3086 log.go:181] (0xc0001ea370) Data frame received for 5\nI0204 13:48:45.904650 3086 log.go:181] (0xc0009ca1e0) (5) Data frame handling\nI0204 13:48:45.904660 3086 log.go:181] (0xc0009ca1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.105.42:80/\nI0204 13:48:45.908085 3086 log.go:181] (0xc0001ea370) Data frame received for 3\nI0204 13:48:45.908111 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.908142 3086 log.go:181] (0xc000d08460) (3) Data frame sent\nI0204 13:48:45.908478 3086 log.go:181] (0xc0001ea370) Data frame received for 3\nI0204 13:48:45.908508 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.908522 3086 log.go:181] (0xc000d08460) (3) Data frame sent\nI0204 13:48:45.908560 3086 log.go:181] (0xc0001ea370) Data frame received for 5\nI0204 13:48:45.908602 3086 log.go:181] (0xc0009ca1e0) (5) Data frame handling\nI0204 13:48:45.908625 3086 log.go:181] (0xc0009ca1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0204 13:48:45.908640 3086 log.go:181] (0xc0001ea370) Data frame received for 5\nI0204 13:48:45.908673 3086 log.go:181] (0xc0009ca1e0) (5) Data frame handling\nI0204 13:48:45.908787 3086 log.go:181] (0xc0009ca1e0) (5) Data frame sent\n http://10.96.105.42:80/\nI0204 13:48:45.913712 3086 log.go:181] (0xc0001ea370) Data frame received for 3\nI0204 13:48:45.913738 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.913769 3086 log.go:181] (0xc000d08460) (3) Data frame sent\nI0204 13:48:45.913996 3086 log.go:181] (0xc0001ea370) Data frame received for 5\nI0204 13:48:45.914019 3086 log.go:181] (0xc0009ca1e0) (5) Data frame handling\nI0204 13:48:45.914030 3086 log.go:181] (0xc0009ca1e0) (5) Data frame sent\nI0204 13:48:45.914039 3086 log.go:181] (0xc0001ea370) Data frame received for 5\nI0204 13:48:45.914047 3086 log.go:181] (0xc0009ca1e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.105.42:80/\nI0204 13:48:45.914064 3086 log.go:181] (0xc0009ca1e0) (5) Data frame sent\nI0204 13:48:45.914073 3086 log.go:181] (0xc0001ea370) Data frame received for 3\nI0204 13:48:45.914081 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.914088 3086 log.go:181] (0xc000d08460) (3) Data frame sent\nI0204 13:48:45.917184 3086 log.go:181] (0xc0001ea370) Data frame received for 3\nI0204 13:48:45.917204 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.917218 3086 log.go:181] (0xc000d08460) (3) Data frame sent\nI0204 13:48:45.917675 3086 log.go:181] (0xc0001ea370) Data frame received for 3\nI0204 13:48:45.917708 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.917725 3086 log.go:181] (0xc000d08460) (3) Data frame sent\nI0204 13:48:45.917758 3086 log.go:181] (0xc0001ea370) Data frame received for 5\nI0204 13:48:45.917772 
3086 log.go:181] (0xc0009ca1e0) (5) Data frame handling\nI0204 13:48:45.917792 3086 log.go:181] (0xc0009ca1e0) (5) Data frame sent\nI0204 13:48:45.917805 3086 log.go:181] (0xc0001ea370) Data frame received for 5\nI0204 13:48:45.917815 3086 log.go:181] (0xc0009ca1e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.105.42:80/\nI0204 13:48:45.917842 3086 log.go:181] (0xc0009ca1e0) (5) Data frame sent\nI0204 13:48:45.921410 3086 log.go:181] (0xc0001ea370) Data frame received for 3\nI0204 13:48:45.921470 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.921490 3086 log.go:181] (0xc000d08460) (3) Data frame sent\nI0204 13:48:45.922019 3086 log.go:181] (0xc0001ea370) Data frame received for 3\nI0204 13:48:45.922045 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.922063 3086 log.go:181] (0xc000d08460) (3) Data frame sent\nI0204 13:48:45.922084 3086 log.go:181] (0xc0001ea370) Data frame received for 5\nI0204 13:48:45.922096 3086 log.go:181] (0xc0009ca1e0) (5) Data frame handling\nI0204 13:48:45.922109 3086 log.go:181] (0xc0009ca1e0) (5) Data frame sent\nI0204 13:48:45.922138 3086 log.go:181] (0xc0001ea370) Data frame received for 5\nI0204 13:48:45.922150 3086 log.go:181] (0xc0009ca1e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.105.42:80/\nI0204 13:48:45.922166 3086 log.go:181] (0xc0009ca1e0) (5) Data frame sent\nI0204 13:48:45.928185 3086 log.go:181] (0xc0001ea370) Data frame received for 3\nI0204 13:48:45.928210 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.928228 3086 log.go:181] (0xc000d08460) (3) Data frame sent\nI0204 13:48:45.928489 3086 log.go:181] (0xc0001ea370) Data frame received for 5\nI0204 13:48:45.928538 3086 log.go:181] (0xc0009ca1e0) (5) Data frame handling\nI0204 13:48:45.928559 3086 log.go:181] (0xc0009ca1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.105.42:80/\nI0204 13:48:45.928587 3086 log.go:181] (0xc0001ea370) Data frame received for 3\nI0204 13:48:45.928607 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.928630 3086 log.go:181] (0xc000d08460) (3) Data frame sent\nI0204 13:48:45.931973 3086 log.go:181] (0xc0001ea370) Data frame received for 3\nI0204 13:48:45.931983 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.931989 3086 log.go:181] (0xc000d08460) (3) Data frame sent\nI0204 13:48:45.932325 3086 log.go:181] (0xc0001ea370) Data frame received for 3\nI0204 13:48:45.932333 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.932339 3086 log.go:181] (0xc000d08460) (3) Data frame sent\nI0204 13:48:45.932348 3086 log.go:181] (0xc0001ea370) Data frame received for 5\nI0204 13:48:45.932358 3086 log.go:181] (0xc0009ca1e0) (5) Data frame handling\nI0204 13:48:45.932366 3086 log.go:181] (0xc0009ca1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.105.42:80/\nI0204 13:48:45.935286 3086 log.go:181] (0xc0001ea370) Data frame received for 3\nI0204 13:48:45.935301 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.935311 3086 log.go:181] (0xc000d08460) (3) Data frame sent\nI0204 13:48:45.935941 3086 log.go:181] (0xc0001ea370) Data frame received for 3\nI0204 13:48:45.935951 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.935957 3086 log.go:181] (0xc000d08460) (3) Data frame sent\nI0204 13:48:45.935966 3086 log.go:181] (0xc0001ea370) Data frame received for 5\nI0204 13:48:45.935975 
3086 log.go:181] (0xc0009ca1e0) (5) Data frame handling\nI0204 13:48:45.935990 3086 log.go:181] (0xc0009ca1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.105.42:80/\nI0204 13:48:45.938368 3086 log.go:181] (0xc0001ea370) Data frame received for 3\nI0204 13:48:45.938380 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.938392 3086 log.go:181] (0xc000d08460) (3) Data frame sent\nI0204 13:48:45.938700 3086 log.go:181] (0xc0001ea370) Data frame received for 3\nI0204 13:48:45.938719 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.938727 3086 log.go:181] (0xc000d08460) (3) Data frame sent\nI0204 13:48:45.938738 3086 log.go:181] (0xc0001ea370) Data frame received for 5\nI0204 13:48:45.938744 3086 log.go:181] (0xc0009ca1e0) (5) Data frame handling\nI0204 13:48:45.938750 3086 log.go:181] (0xc0009ca1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.105.42:80/\nI0204 13:48:45.942044 3086 log.go:181] (0xc0001ea370) Data frame received for 3\nI0204 13:48:45.942064 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.942087 3086 log.go:181] (0xc000d08460) (3) Data frame sent\nI0204 13:48:45.942217 3086 log.go:181] (0xc0001ea370) Data frame received for 3\nI0204 13:48:45.942236 3086 log.go:181] (0xc000d08460) (3) Data frame handling\nI0204 13:48:45.942290 3086 log.go:181] (0xc0001ea370) Data frame received for 5\nI0204 13:48:45.942313 3086 log.go:181] (0xc0009ca1e0) (5) Data frame handling\nI0204 13:48:45.943464 3086 log.go:181] (0xc0001ea370) Data frame received for 1\nI0204 13:48:45.943482 3086 log.go:181] (0xc000568000) (1) Data frame handling\nI0204 13:48:45.943497 3086 log.go:181] (0xc000568000) (1) Data frame sent\nI0204 13:48:45.943594 3086 log.go:181] (0xc0001ea370) (0xc000568000) Stream removed, broadcasting: 1\nI0204 13:48:45.943632 3086 log.go:181] (0xc0001ea370) Go away received\nI0204 13:48:45.943854 3086 log.go:181] (0xc0001ea370) (0xc000568000) Stream removed, broadcasting: 1\nI0204 13:48:45.943866 3086 log.go:181] (0xc0001ea370) (0xc000d08460) Stream removed, broadcasting: 3\nI0204 13:48:45.943873 3086 log.go:181] (0xc0001ea370) (0xc0009ca1e0) Stream removed, broadcasting: 5\n" Feb 4 13:48:45.949: INFO: stdout: "\naffinity-clusterip-dk2sw\naffinity-clusterip-dk2sw\naffinity-clusterip-dk2sw\naffinity-clusterip-dk2sw\naffinity-clusterip-dk2sw\naffinity-clusterip-dk2sw\naffinity-clusterip-dk2sw\naffinity-clusterip-dk2sw\naffinity-clusterip-dk2sw\naffinity-clusterip-dk2sw\naffinity-clusterip-dk2sw\naffinity-clusterip-dk2sw\naffinity-clusterip-dk2sw\naffinity-clusterip-dk2sw\naffinity-clusterip-dk2sw\naffinity-clusterip-dk2sw" Feb 4 13:48:45.949: INFO: Received response from host: affinity-clusterip-dk2sw Feb 4 13:48:45.949: INFO: Received response from host: affinity-clusterip-dk2sw Feb 4 13:48:45.949: INFO: Received response from host: affinity-clusterip-dk2sw Feb 4 13:48:45.949: INFO: Received response from host: affinity-clusterip-dk2sw Feb 4 13:48:45.949: INFO: Received response from host: affinity-clusterip-dk2sw Feb 4 13:48:45.949: INFO: Received response from host: affinity-clusterip-dk2sw Feb 4 13:48:45.949: INFO: Received response from host: affinity-clusterip-dk2sw Feb 4 13:48:45.949: INFO: Received response from host: affinity-clusterip-dk2sw Feb 4 13:48:45.949: INFO: Received response from host: affinity-clusterip-dk2sw Feb 4 13:48:45.949: INFO: Received response from host: affinity-clusterip-dk2sw Feb 4 13:48:45.949: INFO: Received response from host: 
affinity-clusterip-dk2sw Feb 4 13:48:45.949: INFO: Received response from host: affinity-clusterip-dk2sw Feb 4 13:48:45.949: INFO: Received response from host: affinity-clusterip-dk2sw Feb 4 13:48:45.949: INFO: Received response from host: affinity-clusterip-dk2sw Feb 4 13:48:45.949: INFO: Received response from host: affinity-clusterip-dk2sw Feb 4 13:48:45.949: INFO: Received response from host: affinity-clusterip-dk2sw Feb 4 13:48:45.949: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-2445, will wait for the garbage collector to delete the pods Feb 4 13:48:48.574: INFO: Deleting ReplicationController affinity-clusterip took: 919.528207ms Feb 4 13:48:49.574: INFO: Terminating ReplicationController affinity-clusterip pods took: 1.000258201s [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:49:32.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2445" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:744 • [SLOW TEST:74.490 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":311,"completed":186,"skipped":3225,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:49:32.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: getting the auto-created API token STEP: reading a file in the container Feb 4 13:49:41.983: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9999 pod-service-account-bbccee13-4bea-4a39-9fad-756ef643f133 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Feb 4 13:49:42.435: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9999 pod-service-account-bbccee13-4bea-4a39-9fad-756ef643f133 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Feb 4 13:49:42.650: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9999 pod-service-account-bbccee13-4bea-4a39-9fad-756ef643f133 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:49:42.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9999" for this suite. • [SLOW TEST:10.012 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":311,"completed":187,"skipped":3246,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:49:42.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Feb 4 13:49:43.263: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 4 13:49:43.570: INFO: Waiting for terminating namespaces to be deleted... 
Feb 4 13:49:43.611: INFO: Logging pods the apiserver thinks is on node latest-worker before test Feb 4 13:49:43.643: INFO: csi-mockplugin-0 from csi-mock-volumes-6911-5117 started at 2021-02-04 13:49:06 +0000 UTC (3 container statuses recorded) Feb 4 13:49:43.643: INFO: Container csi-provisioner ready: true, restart count 0 Feb 4 13:49:43.643: INFO: Container driver-registrar ready: true, restart count 0 Feb 4 13:49:43.643: INFO: Container mock ready: true, restart count 0 Feb 4 13:49:43.643: INFO: pvc-volume-tester-wp5cx from csi-mock-volumes-6911 started at 2021-02-04 13:49:23 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.643: INFO: Container volume-tester ready: true, restart count 0 Feb 4 13:49:43.643: INFO: chaos-controller-manager-69c479c674-tdrls from default started at 2021-01-26 11:59:56 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.643: INFO: Container chaos-mesh ready: true, restart count 0 Feb 4 13:49:43.643: INFO: chaos-daemon-vkxzr from default started at 2021-01-26 11:59:56 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.643: INFO: Container chaos-daemon ready: true, restart count 0 Feb 4 13:49:43.643: INFO: coredns-74ff55c5b-zzl9d from kube-system started at 2021-02-04 13:09:59 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.643: INFO: Container coredns ready: true, restart count 0 Feb 4 13:49:43.643: INFO: kindnet-5bf5g from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.643: INFO: Container kindnet-cni ready: true, restart count 0 Feb 4 13:49:43.643: INFO: kube-proxy-f59c8 from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.643: INFO: Container kube-proxy ready: true, restart count 0 Feb 4 13:49:43.643: INFO: pod-044a87c6-6f34-446d-910e-7623cad2a8f6 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:49 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.643: INFO: Container write-pod ready: false, restart count 0 Feb 4 13:49:43.643: INFO: pod-08c9721f-6b85-4ed0-bf24-e48d9d8b1852 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:48 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.643: INFO: Container write-pod ready: false, restart count 0 Feb 4 13:49:43.643: INFO: pod-0eac2bed-0b01-41e3-a44e-c337d9416701 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:49 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.643: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.643: INFO: pod-163d9bcf-d0e6-4e86-b845-1cbfc9ea2702 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:48 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.643: INFO: Container write-pod ready: false, restart count 0 Feb 4 13:49:43.643: INFO: pod-19d0a4fe-ce47-49b0-b72e-b5ea93bb345f from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:48 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.643: INFO: Container write-pod ready: false, restart count 0 Feb 4 13:49:43.643: INFO: pod-45b0f00e-3ecf-4278-9b96-c0afe503d0bc from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:47 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.643: INFO: Container write-pod ready: false, restart count 0 Feb 4 13:49:43.643: INFO: pod-49ddf632-1fff-41ca-a8d2-a51ae250850b from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:48 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.643: INFO: Container write-pod ready: false, 
restart count 0 Feb 4 13:49:43.643: INFO: pod-5fb453c4-392d-44f4-9798-4bc5292fec2a from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:48 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.643: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.643: INFO: pod-613c95e7-acc9-4d59-8f5d-bcf37e372793 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:49 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.643: INFO: Container write-pod ready: false, restart count 0 Feb 4 13:49:43.643: INFO: pod-63caee16-f1dd-4419-ba35-526fcfe27a62 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:48 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.643: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.643: INFO: pod-66683898-311f-4683-b6fa-36017d37633d from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:48 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.643: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.643: INFO: pod-6a8ee4fb-2f53-4ce9-88bf-57e8a6c3d33f from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:49 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.643: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.643: INFO: pod-6dedc8aa-c96e-4474-a3c9-94fbfb845812 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:47 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.643: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.643: INFO: pod-723a8637-1370-4e30-94b9-b000f5d9ac74 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:48 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.643: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.643: INFO: pod-79043c9e-b170-4756-a021-eb61853cb837 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:48 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.643: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.643: INFO: pod-7b4c8daf-cbab-4df2-b12f-1cc0d67453ee from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:49 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.643: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.643: INFO: pod-80540d4d-60de-4eeb-b8ea-b56bb36ebdb2 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:48 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.643: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.643: INFO: pod-89aecb30-ffbe-439c-b4ee-898e93868e02 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:49 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.643: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.643: INFO: pod-8a6fe201-ecb1-4206-b820-1b14c8c092ac from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:47 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.643: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.643: INFO: pod-8ddd5e14-b87e-4558-a251-43d6b33b3a69 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:49 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.643: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.643: INFO: pod-8e68665c-719c-4e8a-8808-5d8002c819c8 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:48 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.643: INFO: Container 
write-pod ready: true, restart count 0 Feb 4 13:49:43.643: INFO: pod-8fe90410-cba8-4652-a759-a9c7094c3e8d from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:48 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.643: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.643: INFO: pod-920a6339-8825-4981-b89a-824d9dac1a52 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:47 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.644: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.644: INFO: pod-933fdb61-1845-4b3f-ab56-bcf9e9e85ac6 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:51 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.644: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.644: INFO: pod-960ff860-bab8-4d17-9539-0c13de0d0b2f from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:51 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.644: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.644: INFO: pod-98a73fe2-ced6-4eeb-af39-6e4b03b10e3a from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:49 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.644: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.644: INFO: pod-996127f1-534d-4c82-a8e8-e25843db4e1d from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:48 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.644: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.644: INFO: pod-9b000ca4-ff56-4518-b8e2-45fdee7c4ad4 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:48 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.644: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.644: INFO: pod-9ec90b65-44c6-400a-8235-1b095e39d63b from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:49 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.644: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.644: INFO: pod-a1f93917-8d41-43dd-9cc7-9f26f20facb9 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:48 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.644: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.644: INFO: pod-a61ce55e-50dc-4fe4-a28e-722f8e760723 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:49 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.644: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.644: INFO: pod-b8bc443f-f039-442a-981f-7e460d218f76 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:51 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.644: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.644: INFO: pod-c0c263be-a9df-4568-97d1-fe40bd5ea579 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:47 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.644: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.644: INFO: pod-c18cfc27-5d20-4f99-8eb3-edbfd80b5c30 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:49 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.644: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.644: INFO: pod-c1e62f70-ac91-4ae7-9c0f-97122c7f1242 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:47 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.644: 
INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.644: INFO: pod-cced118f-ac3d-4b87-9401-8e18ce4aafb6 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:48 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.644: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.644: INFO: pod-d71c9ef2-15e1-4748-be98-54a614eba914 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:48 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.644: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.644: INFO: pod-da879d63-22cd-4777-a8a7-f5d53bc4a3c8 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:47 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.644: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.644: INFO: pod-db859a89-4a75-472b-b77c-23254883e0e1 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:50 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.644: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.644: INFO: pod-e2a8d55a-663a-4cdc-aa88-c1ca45d9a5d4 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:48 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.644: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.644: INFO: pod-e2f6a543-defd-4b5e-b6ce-2ec22baeb0f1 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:48 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.644: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.644: INFO: pod-e4359c7a-7395-40b5-a24f-d502d6382f31 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:48 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.644: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.644: INFO: pod-e9bd269f-4662-461f-b7b5-fde74b71043c from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:48 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.644: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.644: INFO: pod-f1f8b6d1-15ac-4479-bfce-8d2cf396eec8 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:50 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.644: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.644: INFO: pod-f451155e-a2ad-4e7a-b9fd-dc5964349824 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:48 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.644: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.644: INFO: pod-f5dd4169-154d-446e-a162-e26f25a4c9c7 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:49 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.644: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.644: INFO: pod-f6f7532c-2e78-44e3-8212-15e236ad24db from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:49 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.644: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.644: INFO: pod-f86548f1-34d9-4535-8d60-837490fe639f from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:48 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.644: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.644: INFO: pod-f940e3c2-e9c8-4c5b-a10f-542afc9c7ef8 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:52 +0000 UTC (1 container statuses recorded) Feb 
4 13:49:43.644: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.644: INFO: pod-fead898c-24a9-4126-b376-59cc0b648493 from persistent-local-volumes-test-2330 started at 2021-02-04 13:47:48 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.644: INFO: Container write-pod ready: true, restart count 0 Feb 4 13:49:43.644: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Feb 4 13:49:43.669: INFO: chaos-daemon-g67vf from default started at 2021-01-26 11:59:56 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.669: INFO: Container chaos-daemon ready: true, restart count 0 Feb 4 13:49:43.669: INFO: coredns-74ff55c5b-674bk from kube-system started at 2021-02-04 13:09:59 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.669: INFO: Container coredns ready: true, restart count 0 Feb 4 13:49:43.669: INFO: kindnet-98jtw from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.669: INFO: Container kindnet-cni ready: true, restart count 0 Feb 4 13:49:43.669: INFO: kube-proxy-skm7x from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.669: INFO: Container kube-proxy ready: true, restart count 0 Feb 4 13:49:43.669: INFO: pod-service-account-bbccee13-4bea-4a39-9fad-756ef643f133 from svcaccounts-9999 started at 2021-02-04 13:49:33 +0000 UTC (1 container statuses recorded) Feb 4 13:49:43.669: INFO: Container test ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-86b8203b-5310-400a-a55d-80cd616a5470 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 172.18.0.16 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-86b8203b-5310-400a-a55d-80cd616a5470 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-86b8203b-5310-400a-a55d-80cd616a5470 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:54:57.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9998" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:314.724 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":311,"completed":188,"skipped":3266,"failed":0} S ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:54:57.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:129 [It] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 13:54:57.749: INFO: Create a RollingUpdate DaemonSet Feb 4 13:54:57.753: INFO: Check that daemon pods launch on every node of the cluster Feb 4 13:54:57.759: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:54:57.798: INFO: Number of nodes with available pods: 0 Feb 4 13:54:57.798: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:54:58.998: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:54:59.001: INFO: Number of nodes with available pods: 0 Feb 4 13:54:59.001: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:54:59.802: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:54:59.806: INFO: Number of nodes with available pods: 0 Feb 4 13:54:59.806: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:55:00.872: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:55:00.877: INFO: Number of nodes with available pods: 0 Feb 4 13:55:00.877: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:55:01.836: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:55:01.839: INFO: 
Number of nodes with available pods: 0 Feb 4 13:55:01.839: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:55:02.809: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:55:02.829: INFO: Number of nodes with available pods: 1 Feb 4 13:55:02.829: INFO: Node latest-worker is running more than one daemon pod Feb 4 13:55:03.803: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:55:03.805: INFO: Number of nodes with available pods: 2 Feb 4 13:55:03.805: INFO: Number of running nodes: 2, number of available pods: 2 Feb 4 13:55:03.805: INFO: Update the DaemonSet to trigger a rollout Feb 4 13:55:03.892: INFO: Updating DaemonSet daemon-set Feb 4 13:55:40.970: INFO: Roll back the DaemonSet before rollout is complete Feb 4 13:55:40.979: INFO: Updating DaemonSet daemon-set Feb 4 13:55:40.979: INFO: Make sure DaemonSet rollback is complete Feb 4 13:55:41.026: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:55:41.026: INFO: Pod daemon-set-49j94 is not available Feb 4 13:55:41.076: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:55:42.081: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:55:42.081: INFO: Pod daemon-set-49j94 is not available Feb 4 13:55:42.085: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:55:43.110: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:55:43.110: INFO: Pod daemon-set-49j94 is not available Feb 4 13:55:43.114: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:55:44.082: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:55:44.082: INFO: Pod daemon-set-49j94 is not available Feb 4 13:55:44.086: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:55:45.082: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:55:45.082: INFO: Pod daemon-set-49j94 is not available Feb 4 13:55:45.086: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:55:46.085: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Feb 4 13:55:46.085: INFO: Pod daemon-set-49j94 is not available Feb 4 13:55:46.090: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:55:47.092: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:55:47.093: INFO: Pod daemon-set-49j94 is not available Feb 4 13:55:47.097: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:55:48.081: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:55:48.081: INFO: Pod daemon-set-49j94 is not available Feb 4 13:55:48.085: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:55:49.080: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:55:49.080: INFO: Pod daemon-set-49j94 is not available Feb 4 13:55:49.084: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:55:50.081: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:55:50.081: INFO: Pod daemon-set-49j94 is not available Feb 4 13:55:50.085: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:55:51.080: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:55:51.080: INFO: Pod daemon-set-49j94 is not available Feb 4 13:55:51.084: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:55:52.081: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:55:52.081: INFO: Pod daemon-set-49j94 is not available Feb 4 13:55:52.085: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:55:53.081: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:55:53.081: INFO: Pod daemon-set-49j94 is not available Feb 4 13:55:53.087: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:55:54.082: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:55:54.082: INFO: Pod daemon-set-49j94 is not available Feb 4 13:55:54.087: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:55:55.093: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Feb 4 13:55:55.093: INFO: Pod daemon-set-49j94 is not available Feb 4 13:55:55.097: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:55:56.080: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:55:56.080: INFO: Pod daemon-set-49j94 is not available Feb 4 13:55:56.084: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:55:57.081: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:55:57.081: INFO: Pod daemon-set-49j94 is not available Feb 4 13:55:57.086: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:55:58.080: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:55:58.080: INFO: Pod daemon-set-49j94 is not available Feb 4 13:55:58.084: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:55:59.105: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:55:59.105: INFO: Pod daemon-set-49j94 is not available Feb 4 13:55:59.109: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:00.081: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:00.081: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:00.084: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:01.080: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:01.080: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:01.084: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:02.080: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:02.080: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:02.091: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:03.084: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:03.084: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:03.087: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:04.092: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Feb 4 13:56:04.092: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:04.096: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:05.518: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:05.518: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:05.522: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:06.084: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:06.084: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:06.087: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:07.116: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:07.116: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:07.121: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:08.090: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:08.090: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:08.094: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:09.097: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:09.097: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:09.101: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:10.364: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:10.364: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:10.368: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:11.106: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:11.106: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:11.120: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:12.079: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:12.079: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:12.083: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:13.112: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Feb 4 13:56:13.112: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:13.118: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:14.081: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:14.081: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:14.085: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:15.080: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:15.080: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:15.084: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:16.082: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:16.082: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:16.086: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:17.083: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:17.083: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:17.091: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:18.079: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:18.079: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:18.088: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:19.101: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:19.101: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:19.123: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:20.081: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:20.081: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:20.085: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:21.081: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:21.081: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:21.085: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:22.282: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Feb 4 13:56:22.282: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:22.454: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:23.082: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:23.082: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:23.086: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:24.189: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:24.189: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:24.267: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:25.165: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:25.165: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:25.238: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:26.080: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:26.080: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:26.083: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:27.081: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:27.081: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:27.105: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:28.080: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:28.080: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:28.083: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:29.081: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:29.081: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:29.087: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:30.082: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:30.082: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:30.090: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:31.080: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Feb 4 13:56:31.080: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:31.085: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:32.081: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:32.081: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:32.085: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:33.081: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:33.081: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:33.086: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:34.082: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:34.082: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:34.086: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:35.081: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:35.081: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:35.086: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:36.082: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:36.082: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:36.086: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:37.081: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:37.081: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:37.087: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:38.081: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:38.081: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:38.084: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:39.080: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:39.080: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:39.085: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:40.081: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Feb 4 13:56:40.081: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:40.086: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:41.111: INFO: Wrong image for pod: daemon-set-49j94. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 4 13:56:41.111: INFO: Pod daemon-set-49j94 is not available Feb 4 13:56:41.358: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Feb 4 13:56:42.081: INFO: Pod daemon-set-5mrtc is not available Feb 4 13:56:42.084: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:95 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2307, will wait for the garbage collector to delete the pods Feb 4 13:56:42.166: INFO: Deleting DaemonSet.extensions daemon-set took: 6.98137ms Feb 4 13:56:42.767: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.23386ms Feb 4 13:57:41.174: INFO: Number of nodes with available pods: 0 Feb 4 13:57:41.174: INFO: Number of running nodes: 0, number of available pods: 0 Feb 4 13:57:41.206: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"2101564"},"items":null} Feb 4 13:57:41.210: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2101564"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:57:41.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2307" for this suite. 
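(Editorial note on the rollback test above, in API terms: it updated the DaemonSet's container image to foo:non-existent, a rollout that can never become available, then set the image back to docker.io/library/httpd:2.4.38-alpine before the rollout finished; the "without unnecessary restarts" assertion is that pods still running the old image are left alone rather than recreated. Here is a rough client-go sketch of those two updates, assuming a reachable kubeconfig at the default path; namespace and object names are copied from the log, and error handling is trimmed to keep it short.)

// Rough sketch: flip a RollingUpdate DaemonSet to a bad image, then
// restore the original image to roll back mid-rollout.
package main

import (
	"context"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	dsClient := cs.AppsV1().DaemonSets("daemonsets-2307")

	setImage := func(image string) {
		ds, err := dsClient.Get(context.TODO(), "daemon-set", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		ds.Spec.Template.Spec.Containers[0].Image = image
		if _, err := dsClient.Update(context.TODO(), ds, metav1.UpdateOptions{}); err != nil {
			log.Fatal(err)
		}
	}

	setImage("foo:non-existent")                      // trigger a rollout that can never go Ready
	setImage("docker.io/library/httpd:2.4.38-alpine") // roll back; healthy old-image pods are untouched
}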
• [SLOW TEST:163.644 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":311,"completed":189,"skipped":3267,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:57:41.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 4 13:57:42.000: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Feb 4 13:57:44.153: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748043862, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748043862, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748043862, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748043862, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 13:57:46.160: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748043862, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748043862, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748043862, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748043862, loc:(*time.Location)(0x7886c60)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 4 13:57:49.191: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 13:57:49.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-879-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:57:50.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4263" for this suite. STEP: Destroying namespace "webhook-4263-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:9.405 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":311,"completed":190,"skipped":3283,"failed":0} [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:57:50.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating projection with secret that has name projected-secret-test-1ae5c00f-f8dc-428c-96ad-d91616aa3cc0 STEP: Creating a pod to test consume secrets Feb 4 13:57:50.734: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cf3de9e5-97cb-4cc5-92f2-e78ab6a1c84e" in namespace "projected-4822" to be "Succeeded or Failed" Feb 4 13:57:50.772: INFO: Pod "pod-projected-secrets-cf3de9e5-97cb-4cc5-92f2-e78ab6a1c84e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 37.755237ms Feb 4 13:57:52.793: INFO: Pod "pod-projected-secrets-cf3de9e5-97cb-4cc5-92f2-e78ab6a1c84e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058914265s Feb 4 13:57:54.797: INFO: Pod "pod-projected-secrets-cf3de9e5-97cb-4cc5-92f2-e78ab6a1c84e": Phase="Running", Reason="", readiness=true. Elapsed: 4.062702218s Feb 4 13:57:56.804: INFO: Pod "pod-projected-secrets-cf3de9e5-97cb-4cc5-92f2-e78ab6a1c84e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.069484335s STEP: Saw pod success Feb 4 13:57:56.804: INFO: Pod "pod-projected-secrets-cf3de9e5-97cb-4cc5-92f2-e78ab6a1c84e" satisfied condition "Succeeded or Failed" Feb 4 13:57:56.807: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-cf3de9e5-97cb-4cc5-92f2-e78ab6a1c84e container projected-secret-volume-test: STEP: delete the pod Feb 4 13:57:56.857: INFO: Waiting for pod pod-projected-secrets-cf3de9e5-97cb-4cc5-92f2-e78ab6a1c84e to disappear Feb 4 13:57:56.871: INFO: Pod pod-projected-secrets-cf3de9e5-97cb-4cc5-92f2-e78ab6a1c84e no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:57:56.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4822" for this suite. • [SLOW TEST:6.246 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":191,"skipped":3283,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:57:56.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 4 13:58:01.373: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:58:01.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3008" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":311,"completed":192,"skipped":3300,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:58:01.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name cm-test-opt-del-a9c21545-5929-4399-84dd-e22cf406a9d3 STEP: Creating configMap with name cm-test-opt-upd-773524b1-3d60-4a2b-8972-99c4e7087e56 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-a9c21545-5929-4399-84dd-e22cf406a9d3 STEP: Updating configmap cm-test-opt-upd-773524b1-3d60-4a2b-8972-99c4e7087e56 STEP: Creating configMap with name cm-test-opt-create-53e5f644-916a-40f9-aaa4-29acf83fd4f1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:59:34.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5036" for this suite. 
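For context, the optional-configMap steps above mount configMaps through a projected volume with optional set, which is what lets the test delete one source and create another while the pod keeps running. A minimal sketch of such a pod, with names, image, and command assumed for illustration (the suite builds its specs in Go, not from a manifest):

    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-cm-example              # hypothetical name
    spec:
      containers:
      - name: cm-watcher
        image: busybox                        # assumed image
        command: ["sh", "-c", "while true; do cat /etc/cm/* 2>/dev/null; sleep 5; done"]
        volumeMounts:
        - name: cm-volume
          mountPath: /etc/cm
      volumes:
      - name: cm-volume
        projected:
          sources:
          - configMap:
              name: cm-test-opt-del-example     # hypothetical; optional tolerates later deletion
              optional: true
          - configMap:
              name: cm-test-opt-create-example  # hypothetical; may not exist yet at pod start
              optional: true

The kubelet re-syncs projected volume contents periodically, which is why the test simply waits to observe the update in the volume.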
• [SLOW TEST:92.818 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":311,"completed":193,"skipped":3322,"failed":0} SSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:59:34.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 13:59:34.668: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-6197 I0204 13:59:34.714690 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-6197, replica count: 1 I0204 13:59:35.765178 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 13:59:36.765484 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 13:59:37.765771 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 13:59:38.765977 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 4 13:59:38.904: INFO: Created: latency-svc-vcvvr Feb 4 13:59:38.923: INFO: Got endpoints: latency-svc-vcvvr [57.256482ms] Feb 4 13:59:39.017: INFO: Created: latency-svc-dmzq8 Feb 4 13:59:39.048: INFO: Got endpoints: latency-svc-dmzq8 [124.51357ms] Feb 4 13:59:39.048: INFO: Created: latency-svc-hk2rf Feb 4 13:59:39.066: INFO: Got endpoints: latency-svc-hk2rf [142.659981ms] Feb 4 13:59:39.084: INFO: Created: latency-svc-ff9xr Feb 4 13:59:39.166: INFO: Got endpoints: latency-svc-ff9xr [243.218002ms] Feb 4 13:59:39.179: INFO: Created: latency-svc-8b2hs Feb 4 13:59:39.204: INFO: Got endpoints: latency-svc-8b2hs [280.997084ms] Feb 4 13:59:39.246: INFO: Created: latency-svc-78zck Feb 4 13:59:39.254: INFO: Got endpoints: latency-svc-78zck [330.728128ms] Feb 4 13:59:39.310: INFO: Created: latency-svc-nv26f Feb 4 13:59:39.372: INFO: Got endpoints: latency-svc-nv26f [448.844896ms] Feb 4 13:59:39.373: INFO: Created: latency-svc-h6fnp Feb 4 13:59:39.408: INFO: Got endpoints: latency-svc-h6fnp [484.396754ms] Feb 4 13:59:39.467: INFO: Created: latency-svc-bd68l Feb 4 13:59:39.487: INFO: Created: latency-svc-4zv4n Feb 4 13:59:39.487: INFO: Got endpoints: latency-svc-bd68l [563.381818ms] Feb 4 
13:59:39.510: INFO: Got endpoints: latency-svc-4zv4n [587.001187ms] Feb 4 13:59:39.540: INFO: Created: latency-svc-2dldn Feb 4 13:59:39.597: INFO: Got endpoints: latency-svc-2dldn [674.017221ms] Feb 4 13:59:39.618: INFO: Created: latency-svc-p4hxj Feb 4 13:59:39.635: INFO: Got endpoints: latency-svc-p4hxj [712.060862ms] Feb 4 13:59:39.696: INFO: Created: latency-svc-kcwqp Feb 4 13:59:39.747: INFO: Got endpoints: latency-svc-kcwqp [823.668824ms] Feb 4 13:59:39.762: INFO: Created: latency-svc-rt5lm Feb 4 13:59:39.785: INFO: Got endpoints: latency-svc-rt5lm [861.977056ms] Feb 4 13:59:39.828: INFO: Created: latency-svc-kt5jz Feb 4 13:59:39.846: INFO: Got endpoints: latency-svc-kt5jz [922.641361ms] Feb 4 13:59:39.895: INFO: Created: latency-svc-4hj45 Feb 4 13:59:39.917: INFO: Got endpoints: latency-svc-4hj45 [994.033731ms] Feb 4 13:59:40.212: INFO: Created: latency-svc-6l825 Feb 4 13:59:40.394: INFO: Got endpoints: latency-svc-6l825 [1.346068678s] Feb 4 13:59:40.397: INFO: Created: latency-svc-fgv95 Feb 4 13:59:40.429: INFO: Got endpoints: latency-svc-fgv95 [1.362843042s] Feb 4 13:59:40.633: INFO: Created: latency-svc-dtlb2 Feb 4 13:59:40.734: INFO: Got endpoints: latency-svc-dtlb2 [1.567674348s] Feb 4 13:59:40.800: INFO: Created: latency-svc-lw8fm Feb 4 13:59:40.849: INFO: Got endpoints: latency-svc-lw8fm [1.644539896s] Feb 4 13:59:41.046: INFO: Created: latency-svc-pz8dp Feb 4 13:59:41.075: INFO: Got endpoints: latency-svc-pz8dp [1.821142982s] Feb 4 13:59:41.196: INFO: Created: latency-svc-lcbwr Feb 4 13:59:41.207: INFO: Got endpoints: latency-svc-lcbwr [1.834757901s] Feb 4 13:59:41.238: INFO: Created: latency-svc-9w7r9 Feb 4 13:59:41.253: INFO: Got endpoints: latency-svc-9w7r9 [1.845058136s] Feb 4 13:59:41.317: INFO: Created: latency-svc-kx9rc Feb 4 13:59:41.336: INFO: Got endpoints: latency-svc-kx9rc [1.849441401s] Feb 4 13:59:41.382: INFO: Created: latency-svc-7xcwc Feb 4 13:59:41.563: INFO: Got endpoints: latency-svc-7xcwc [2.052403438s] Feb 4 13:59:41.684: INFO: Created: latency-svc-pvdd5 Feb 4 13:59:41.748: INFO: Got endpoints: latency-svc-pvdd5 [2.151190955s] Feb 4 13:59:41.864: INFO: Created: latency-svc-4qrch Feb 4 13:59:41.900: INFO: Got endpoints: latency-svc-4qrch [2.264354169s] Feb 4 13:59:41.941: INFO: Created: latency-svc-pw8g9 Feb 4 13:59:41.987: INFO: Got endpoints: latency-svc-pw8g9 [2.239916749s] Feb 4 13:59:42.043: INFO: Created: latency-svc-2bpqj Feb 4 13:59:42.051: INFO: Got endpoints: latency-svc-2bpqj [2.265514839s] Feb 4 13:59:42.067: INFO: Created: latency-svc-zkqf6 Feb 4 13:59:42.075: INFO: Got endpoints: latency-svc-zkqf6 [2.229257305s] Feb 4 13:59:42.139: INFO: Created: latency-svc-dxtz6 Feb 4 13:59:42.154: INFO: Got endpoints: latency-svc-dxtz6 [2.236583537s] Feb 4 13:59:42.207: INFO: Created: latency-svc-2hkkl Feb 4 13:59:42.219: INFO: Got endpoints: latency-svc-2hkkl [1.825176319s] Feb 4 13:59:42.268: INFO: Created: latency-svc-j6zdv Feb 4 13:59:42.274: INFO: Got endpoints: latency-svc-j6zdv [1.845020953s] Feb 4 13:59:42.313: INFO: Created: latency-svc-mqbvv Feb 4 13:59:42.355: INFO: Got endpoints: latency-svc-mqbvv [1.620920247s] Feb 4 13:59:42.454: INFO: Created: latency-svc-vb425 Feb 4 13:59:42.488: INFO: Got endpoints: latency-svc-vb425 [1.638665051s] Feb 4 13:59:42.488: INFO: Created: latency-svc-qkbcn Feb 4 13:59:42.529: INFO: Got endpoints: latency-svc-qkbcn [1.454189672s] Feb 4 13:59:42.603: INFO: Created: latency-svc-kwmt8 Feb 4 13:59:42.613: INFO: Got endpoints: latency-svc-kwmt8 [1.405997881s] Feb 4 13:59:42.638: INFO: Created: latency-svc-kz4wd Feb 
4 13:59:42.655: INFO: Got endpoints: latency-svc-kz4wd [1.401805761s] Feb 4 13:59:42.692: INFO: Created: latency-svc-jbpfj Feb 4 13:59:42.738: INFO: Got endpoints: latency-svc-jbpfj [1.401859376s] Feb 4 13:59:42.763: INFO: Created: latency-svc-9nk4t Feb 4 13:59:42.793: INFO: Got endpoints: latency-svc-9nk4t [1.230546223s] Feb 4 13:59:42.829: INFO: Created: latency-svc-tj2t4 Feb 4 13:59:42.909: INFO: Got endpoints: latency-svc-tj2t4 [1.160905929s] Feb 4 13:59:42.925: INFO: Created: latency-svc-wt47s Feb 4 13:59:42.945: INFO: Got endpoints: latency-svc-wt47s [1.045007769s] Feb 4 13:59:42.968: INFO: Created: latency-svc-xbxjl Feb 4 13:59:42.986: INFO: Got endpoints: latency-svc-xbxjl [999.265594ms] Feb 4 13:59:43.041: INFO: Created: latency-svc-hwsr4 Feb 4 13:59:43.069: INFO: Got endpoints: latency-svc-hwsr4 [1.018409236s] Feb 4 13:59:43.071: INFO: Created: latency-svc-b6p96 Feb 4 13:59:43.100: INFO: Got endpoints: latency-svc-b6p96 [1.024305961s] Feb 4 13:59:43.117: INFO: Created: latency-svc-cvfcv Feb 4 13:59:43.128: INFO: Got endpoints: latency-svc-cvfcv [973.972121ms] Feb 4 13:59:43.185: INFO: Created: latency-svc-9xkl6 Feb 4 13:59:43.243: INFO: Got endpoints: latency-svc-9xkl6 [1.02440731s] Feb 4 13:59:43.245: INFO: Created: latency-svc-kvdzk Feb 4 13:59:43.279: INFO: Got endpoints: latency-svc-kvdzk [1.005332058s] Feb 4 13:59:43.339: INFO: Created: latency-svc-qq8fg Feb 4 13:59:43.356: INFO: Got endpoints: latency-svc-qq8fg [1.000908171s] Feb 4 13:59:43.394: INFO: Created: latency-svc-6gdx7 Feb 4 13:59:43.496: INFO: Got endpoints: latency-svc-6gdx7 [1.008688229s] Feb 4 13:59:43.544: INFO: Created: latency-svc-vl5dt Feb 4 13:59:43.562: INFO: Got endpoints: latency-svc-vl5dt [1.032021694s] Feb 4 13:59:43.586: INFO: Created: latency-svc-wxh7l Feb 4 13:59:43.639: INFO: Got endpoints: latency-svc-wxh7l [1.026443241s] Feb 4 13:59:43.657: INFO: Created: latency-svc-htmwk Feb 4 13:59:43.676: INFO: Got endpoints: latency-svc-htmwk [1.0211479s] Feb 4 13:59:43.693: INFO: Created: latency-svc-cmcp9 Feb 4 13:59:43.717: INFO: Got endpoints: latency-svc-cmcp9 [979.037215ms] Feb 4 13:59:43.796: INFO: Created: latency-svc-jpljh Feb 4 13:59:43.813: INFO: Got endpoints: latency-svc-jpljh [1.019612272s] Feb 4 13:59:43.850: INFO: Created: latency-svc-7b6v7 Feb 4 13:59:43.858: INFO: Got endpoints: latency-svc-7b6v7 [949.025574ms] Feb 4 13:59:43.951: INFO: Created: latency-svc-t8nnv Feb 4 13:59:43.993: INFO: Got endpoints: latency-svc-t8nnv [1.048083134s] Feb 4 13:59:43.994: INFO: Created: latency-svc-vpzdz Feb 4 13:59:44.035: INFO: Got endpoints: latency-svc-vpzdz [1.048479491s] Feb 4 13:59:44.113: INFO: Created: latency-svc-vkwf6 Feb 4 13:59:44.129: INFO: Got endpoints: latency-svc-vkwf6 [1.059136889s] Feb 4 13:59:44.149: INFO: Created: latency-svc-m8z2w Feb 4 13:59:44.165: INFO: Got endpoints: latency-svc-m8z2w [1.064834839s] Feb 4 13:59:44.185: INFO: Created: latency-svc-k47tw Feb 4 13:59:44.232: INFO: Got endpoints: latency-svc-k47tw [1.103999207s] Feb 4 13:59:44.264: INFO: Created: latency-svc-dkjkw Feb 4 13:59:44.281: INFO: Got endpoints: latency-svc-dkjkw [1.037398279s] Feb 4 13:59:44.330: INFO: Created: latency-svc-rfzcw Feb 4 13:59:44.382: INFO: Got endpoints: latency-svc-rfzcw [1.102267607s] Feb 4 13:59:44.401: INFO: Created: latency-svc-7wkv8 Feb 4 13:59:44.413: INFO: Got endpoints: latency-svc-7wkv8 [1.056664821s] Feb 4 13:59:44.514: INFO: Created: latency-svc-gn465 Feb 4 13:59:44.539: INFO: Got endpoints: latency-svc-gn465 [1.042195915s] Feb 4 13:59:44.540: INFO: Created: latency-svc-nrtx2 Feb 4 
13:59:44.571: INFO: Got endpoints: latency-svc-nrtx2 [1.009478802s] Feb 4 13:59:44.645: INFO: Created: latency-svc-vbs25 Feb 4 13:59:44.662: INFO: Got endpoints: latency-svc-vbs25 [1.022632501s] Feb 4 13:59:44.677: INFO: Created: latency-svc-dmqc2 Feb 4 13:59:44.725: INFO: Got endpoints: latency-svc-dmqc2 [1.049457547s] Feb 4 13:59:44.778: INFO: Created: latency-svc-8s5v9 Feb 4 13:59:44.788: INFO: Got endpoints: latency-svc-8s5v9 [1.070471276s] Feb 4 13:59:44.821: INFO: Created: latency-svc-lfvpt Feb 4 13:59:44.848: INFO: Got endpoints: latency-svc-lfvpt [1.03472507s] Feb 4 13:59:44.933: INFO: Created: latency-svc-thtkf Feb 4 13:59:44.971: INFO: Got endpoints: latency-svc-thtkf [1.112889688s] Feb 4 13:59:44.972: INFO: Created: latency-svc-ls2w9 Feb 4 13:59:45.012: INFO: Got endpoints: latency-svc-ls2w9 [1.019381643s] Feb 4 13:59:45.070: INFO: Created: latency-svc-hlw5n Feb 4 13:59:45.086: INFO: Got endpoints: latency-svc-hlw5n [1.050632725s] Feb 4 13:59:45.109: INFO: Created: latency-svc-mw2pd Feb 4 13:59:45.125: INFO: Got endpoints: latency-svc-mw2pd [996.456309ms] Feb 4 13:59:45.169: INFO: Created: latency-svc-jq9dg Feb 4 13:59:45.245: INFO: Got endpoints: latency-svc-jq9dg [1.080145206s] Feb 4 13:59:45.247: INFO: Created: latency-svc-xzlpc Feb 4 13:59:45.257: INFO: Got endpoints: latency-svc-xzlpc [1.024795824s] Feb 4 13:59:45.278: INFO: Created: latency-svc-sbhd9 Feb 4 13:59:45.287: INFO: Got endpoints: latency-svc-sbhd9 [1.006209895s] Feb 4 13:59:45.307: INFO: Created: latency-svc-mvznd Feb 4 13:59:45.323: INFO: Got endpoints: latency-svc-mvznd [941.415388ms] Feb 4 13:59:45.377: INFO: Created: latency-svc-56dw7 Feb 4 13:59:45.399: INFO: Got endpoints: latency-svc-56dw7 [986.30222ms] Feb 4 13:59:45.527: INFO: Created: latency-svc-mzlcc Feb 4 13:59:45.559: INFO: Got endpoints: latency-svc-mzlcc [1.020492114s] Feb 4 13:59:45.560: INFO: Created: latency-svc-vx85j Feb 4 13:59:45.595: INFO: Got endpoints: latency-svc-vx85j [1.023699563s] Feb 4 13:59:45.669: INFO: Created: latency-svc-gkkbz Feb 4 13:59:45.697: INFO: Got endpoints: latency-svc-gkkbz [1.035033168s] Feb 4 13:59:45.733: INFO: Created: latency-svc-sgqph Feb 4 13:59:45.753: INFO: Got endpoints: latency-svc-sgqph [1.027374328s] Feb 4 13:59:45.820: INFO: Created: latency-svc-zvzz7 Feb 4 13:59:45.853: INFO: Got endpoints: latency-svc-zvzz7 [1.064996748s] Feb 4 13:59:45.854: INFO: Created: latency-svc-45jpw Feb 4 13:59:45.889: INFO: Got endpoints: latency-svc-45jpw [1.040744061s] Feb 4 13:59:45.945: INFO: Created: latency-svc-wvmsk Feb 4 13:59:45.979: INFO: Created: latency-svc-l5j7v Feb 4 13:59:45.980: INFO: Got endpoints: latency-svc-wvmsk [1.008157661s] Feb 4 13:59:45.995: INFO: Got endpoints: latency-svc-l5j7v [982.198069ms] Feb 4 13:59:46.090: INFO: Created: latency-svc-s477f Feb 4 13:59:46.124: INFO: Created: latency-svc-bdksl Feb 4 13:59:46.124: INFO: Got endpoints: latency-svc-s477f [1.038626317s] Feb 4 13:59:46.147: INFO: Got endpoints: latency-svc-bdksl [1.02130843s] Feb 4 13:59:46.240: INFO: Created: latency-svc-qnwmr Feb 4 13:59:46.251: INFO: Got endpoints: latency-svc-qnwmr [1.006364739s] Feb 4 13:59:46.267: INFO: Created: latency-svc-tj5mj Feb 4 13:59:46.280: INFO: Got endpoints: latency-svc-tj5mj [1.022949773s] Feb 4 13:59:46.333: INFO: Created: latency-svc-d55fm Feb 4 13:59:46.394: INFO: Got endpoints: latency-svc-d55fm [1.106623257s] Feb 4 13:59:46.442: INFO: Created: latency-svc-x22nk Feb 4 13:59:46.462: INFO: Got endpoints: latency-svc-x22nk [1.139085682s] Feb 4 13:59:46.520: INFO: Created: latency-svc-zcfl5 Feb 4 
13:59:46.538: INFO: Got endpoints: latency-svc-zcfl5 [1.138360969s] Feb 4 13:59:46.615: INFO: Created: latency-svc-6659x Feb 4 13:59:46.660: INFO: Got endpoints: latency-svc-6659x [1.100278899s] Feb 4 13:59:46.686: INFO: Created: latency-svc-stdmf Feb 4 13:59:46.695: INFO: Got endpoints: latency-svc-stdmf [1.100317723s] Feb 4 13:59:46.729: INFO: Created: latency-svc-mnz6n Feb 4 13:59:46.743: INFO: Got endpoints: latency-svc-mnz6n [1.045968504s] Feb 4 13:59:46.795: INFO: Created: latency-svc-blbd7 Feb 4 13:59:46.825: INFO: Got endpoints: latency-svc-blbd7 [1.072183243s] Feb 4 13:59:46.825: INFO: Created: latency-svc-2v5rq Feb 4 13:59:46.855: INFO: Got endpoints: latency-svc-2v5rq [1.002463723s] Feb 4 13:59:46.873: INFO: Created: latency-svc-6lrtp Feb 4 13:59:46.957: INFO: Got endpoints: latency-svc-6lrtp [1.068136171s] Feb 4 13:59:46.981: INFO: Created: latency-svc-hsz9f Feb 4 13:59:47.002: INFO: Got endpoints: latency-svc-hsz9f [1.022151811s] Feb 4 13:59:47.041: INFO: Created: latency-svc-fgfbv Feb 4 13:59:47.088: INFO: Got endpoints: latency-svc-fgfbv [1.093054788s] Feb 4 13:59:47.100: INFO: Created: latency-svc-76l5m Feb 4 13:59:47.118: INFO: Got endpoints: latency-svc-76l5m [994.024689ms] Feb 4 13:59:47.186: INFO: Created: latency-svc-zksj9 Feb 4 13:59:47.238: INFO: Got endpoints: latency-svc-zksj9 [1.091549114s] Feb 4 13:59:47.241: INFO: Created: latency-svc-ntqm7 Feb 4 13:59:47.257: INFO: Got endpoints: latency-svc-ntqm7 [1.005211809s] Feb 4 13:59:47.287: INFO: Created: latency-svc-mmznd Feb 4 13:59:47.305: INFO: Got endpoints: latency-svc-mmznd [1.024970053s] Feb 4 13:59:47.376: INFO: Created: latency-svc-cq7q5 Feb 4 13:59:47.403: INFO: Got endpoints: latency-svc-cq7q5 [1.009222549s] Feb 4 13:59:47.404: INFO: Created: latency-svc-gt4r8 Feb 4 13:59:47.430: INFO: Got endpoints: latency-svc-gt4r8 [967.993493ms] Feb 4 13:59:47.532: INFO: Created: latency-svc-qvw6z Feb 4 13:59:47.557: INFO: Got endpoints: latency-svc-qvw6z [1.018996779s] Feb 4 13:59:47.557: INFO: Created: latency-svc-hhxzg Feb 4 13:59:47.611: INFO: Got endpoints: latency-svc-hhxzg [951.071688ms] Feb 4 13:59:47.700: INFO: Created: latency-svc-zvnp7 Feb 4 13:59:47.718: INFO: Got endpoints: latency-svc-zvnp7 [1.022599149s] Feb 4 13:59:47.719: INFO: Created: latency-svc-tq9tw Feb 4 13:59:47.754: INFO: Got endpoints: latency-svc-tq9tw [1.010981074s] Feb 4 13:59:47.850: INFO: Created: latency-svc-nrt66 Feb 4 13:59:47.875: INFO: Created: latency-svc-jh4qt Feb 4 13:59:47.876: INFO: Got endpoints: latency-svc-nrt66 [1.050476828s] Feb 4 13:59:47.905: INFO: Got endpoints: latency-svc-jh4qt [1.049380247s] Feb 4 13:59:47.929: INFO: Created: latency-svc-9kdch Feb 4 13:59:47.993: INFO: Got endpoints: latency-svc-9kdch [1.035555831s] Feb 4 13:59:48.025: INFO: Created: latency-svc-t4pwn Feb 4 13:59:48.042: INFO: Got endpoints: latency-svc-t4pwn [1.039788982s] Feb 4 13:59:48.090: INFO: Created: latency-svc-vnmlq Feb 4 13:59:48.137: INFO: Got endpoints: latency-svc-vnmlq [1.048974703s] Feb 4 13:59:48.138: INFO: Created: latency-svc-d6zhj Feb 4 13:59:48.162: INFO: Got endpoints: latency-svc-d6zhj [1.043346852s] Feb 4 13:59:48.235: INFO: Created: latency-svc-ffm55 Feb 4 13:59:48.280: INFO: Got endpoints: latency-svc-ffm55 [1.042081984s] Feb 4 13:59:48.307: INFO: Created: latency-svc-rjg7x Feb 4 13:59:48.325: INFO: Got endpoints: latency-svc-rjg7x [1.068802048s] Feb 4 13:59:48.348: INFO: Created: latency-svc-znjdg Feb 4 13:59:48.472: INFO: Got endpoints: latency-svc-znjdg [1.166788661s] Feb 4 13:59:48.474: INFO: Created: latency-svc-z4br5 Feb 
4 13:59:48.487: INFO: Got endpoints: latency-svc-z4br5 [1.08362601s] Feb 4 13:59:48.516: INFO: Created: latency-svc-g8jpb Feb 4 13:59:48.529: INFO: Got endpoints: latency-svc-g8jpb [1.098873888s] Feb 4 13:59:48.571: INFO: Created: latency-svc-wdfc6 Feb 4 13:59:48.615: INFO: Got endpoints: latency-svc-wdfc6 [1.058237475s] Feb 4 13:59:48.622: INFO: Created: latency-svc-m98b7 Feb 4 13:59:48.641: INFO: Got endpoints: latency-svc-m98b7 [1.030189572s] Feb 4 13:59:48.685: INFO: Created: latency-svc-94t9v Feb 4 13:59:48.759: INFO: Got endpoints: latency-svc-94t9v [144.171803ms] Feb 4 13:59:48.787: INFO: Created: latency-svc-q57ms Feb 4 13:59:48.822: INFO: Got endpoints: latency-svc-q57ms [1.103755877s] Feb 4 13:59:48.846: INFO: Created: latency-svc-z4t7p Feb 4 13:59:48.890: INFO: Got endpoints: latency-svc-z4t7p [1.13609178s] Feb 4 13:59:48.937: INFO: Created: latency-svc-5ngb4 Feb 4 13:59:48.953: INFO: Got endpoints: latency-svc-5ngb4 [1.077115637s] Feb 4 13:59:48.991: INFO: Created: latency-svc-fkh8g Feb 4 13:59:49.052: INFO: Got endpoints: latency-svc-fkh8g [1.147213024s] Feb 4 13:59:49.075: INFO: Created: latency-svc-gjghc Feb 4 13:59:49.092: INFO: Got endpoints: latency-svc-gjghc [1.099640228s] Feb 4 13:59:49.185: INFO: Created: latency-svc-42s2t Feb 4 13:59:49.206: INFO: Got endpoints: latency-svc-42s2t [1.164567376s] Feb 4 13:59:49.207: INFO: Created: latency-svc-npmzz Feb 4 13:59:49.242: INFO: Got endpoints: latency-svc-npmzz [1.104927724s] Feb 4 13:59:49.279: INFO: Created: latency-svc-xl6jh Feb 4 13:59:49.309: INFO: Got endpoints: latency-svc-xl6jh [1.147645378s] Feb 4 13:59:49.326: INFO: Created: latency-svc-4l6hc Feb 4 13:59:49.344: INFO: Got endpoints: latency-svc-4l6hc [1.063463045s] Feb 4 13:59:49.375: INFO: Created: latency-svc-v5jrq Feb 4 13:59:49.384: INFO: Got endpoints: latency-svc-v5jrq [1.057993224s] Feb 4 13:59:49.405: INFO: Created: latency-svc-rvdjx Feb 4 13:59:49.454: INFO: Got endpoints: latency-svc-rvdjx [982.078302ms] Feb 4 13:59:49.489: INFO: Created: latency-svc-68njf Feb 4 13:59:49.503: INFO: Got endpoints: latency-svc-68njf [1.0165301s] Feb 4 13:59:49.536: INFO: Created: latency-svc-nvw2p Feb 4 13:59:49.580: INFO: Got endpoints: latency-svc-nvw2p [1.050453234s] Feb 4 13:59:49.609: INFO: Created: latency-svc-8s8rf Feb 4 13:59:49.618: INFO: Got endpoints: latency-svc-8s8rf [976.860241ms] Feb 4 13:59:49.638: INFO: Created: latency-svc-hkzl5 Feb 4 13:59:49.813: INFO: Got endpoints: latency-svc-hkzl5 [1.054044076s] Feb 4 13:59:49.815: INFO: Created: latency-svc-cvj6v Feb 4 13:59:49.823: INFO: Got endpoints: latency-svc-cvj6v [1.000858878s] Feb 4 13:59:49.842: INFO: Created: latency-svc-j62pw Feb 4 13:59:49.860: INFO: Got endpoints: latency-svc-j62pw [969.215373ms] Feb 4 13:59:49.878: INFO: Created: latency-svc-kbzgd Feb 4 13:59:49.895: INFO: Got endpoints: latency-svc-kbzgd [942.099969ms] Feb 4 13:59:49.951: INFO: Created: latency-svc-qf99t Feb 4 13:59:50.214: INFO: Got endpoints: latency-svc-qf99t [1.162182243s] Feb 4 13:59:50.216: INFO: Created: latency-svc-vnmhw Feb 4 13:59:50.250: INFO: Created: latency-svc-cbk6l Feb 4 13:59:50.250: INFO: Got endpoints: latency-svc-vnmhw [1.157907291s] Feb 4 13:59:50.265: INFO: Got endpoints: latency-svc-cbk6l [1.058239717s] Feb 4 13:59:50.286: INFO: Created: latency-svc-pdj4w Feb 4 13:59:50.295: INFO: Got endpoints: latency-svc-pdj4w [1.052607326s] Feb 4 13:59:50.353: INFO: Created: latency-svc-5fjgb Feb 4 13:59:50.381: INFO: Got endpoints: latency-svc-5fjgb [1.072017664s] Feb 4 13:59:50.383: INFO: Created: latency-svc-nghjq Feb 4 
13:59:50.418: INFO: Got endpoints: latency-svc-nghjq [1.074070679s] Feb 4 13:59:50.442: INFO: Created: latency-svc-mthd4 Feb 4 13:59:50.483: INFO: Got endpoints: latency-svc-mthd4 [1.09967634s] Feb 4 13:59:50.508: INFO: Created: latency-svc-plnx5 Feb 4 13:59:50.529: INFO: Got endpoints: latency-svc-plnx5 [1.074757215s] Feb 4 13:59:50.574: INFO: Created: latency-svc-7slf6 Feb 4 13:59:50.609: INFO: Got endpoints: latency-svc-7slf6 [1.105385787s] Feb 4 13:59:50.633: INFO: Created: latency-svc-bk4ks Feb 4 13:59:50.663: INFO: Got endpoints: latency-svc-bk4ks [1.08337851s] Feb 4 13:59:50.689: INFO: Created: latency-svc-zpmx5 Feb 4 13:59:50.704: INFO: Got endpoints: latency-svc-zpmx5 [1.085994282s] Feb 4 13:59:50.747: INFO: Created: latency-svc-mzrrg Feb 4 13:59:50.777: INFO: Got endpoints: latency-svc-mzrrg [963.861895ms] Feb 4 13:59:50.802: INFO: Created: latency-svc-rbp25 Feb 4 13:59:50.829: INFO: Got endpoints: latency-svc-rbp25 [1.006616524s] Feb 4 13:59:50.879: INFO: Created: latency-svc-snjtv Feb 4 13:59:51.143: INFO: Got endpoints: latency-svc-snjtv [1.283205189s] Feb 4 13:59:51.143: INFO: Created: latency-svc-cq5jc Feb 4 13:59:51.156: INFO: Got endpoints: latency-svc-cq5jc [1.260568114s] Feb 4 13:59:51.179: INFO: Created: latency-svc-z99b2 Feb 4 13:59:51.194: INFO: Got endpoints: latency-svc-z99b2 [979.241685ms] Feb 4 13:59:51.209: INFO: Created: latency-svc-mm5hl Feb 4 13:59:51.227: INFO: Got endpoints: latency-svc-mm5hl [976.373442ms] Feb 4 13:59:51.281: INFO: Created: latency-svc-kbdnq Feb 4 13:59:51.308: INFO: Got endpoints: latency-svc-kbdnq [1.042800861s] Feb 4 13:59:51.335: INFO: Created: latency-svc-7lx6l Feb 4 13:59:51.350: INFO: Got endpoints: latency-svc-7lx6l [1.055006992s] Feb 4 13:59:51.424: INFO: Created: latency-svc-r7xzw Feb 4 13:59:51.461: INFO: Got endpoints: latency-svc-r7xzw [1.079866284s] Feb 4 13:59:51.462: INFO: Created: latency-svc-tms2l Feb 4 13:59:51.497: INFO: Got endpoints: latency-svc-tms2l [1.078566108s] Feb 4 13:59:51.556: INFO: Created: latency-svc-rnt79 Feb 4 13:59:51.575: INFO: Got endpoints: latency-svc-rnt79 [1.091559147s] Feb 4 13:59:51.611: INFO: Created: latency-svc-z6fnt Feb 4 13:59:51.633: INFO: Got endpoints: latency-svc-z6fnt [1.104414995s] Feb 4 13:59:51.688: INFO: Created: latency-svc-cxh5t Feb 4 13:59:51.707: INFO: Got endpoints: latency-svc-cxh5t [1.098133213s] Feb 4 13:59:51.708: INFO: Created: latency-svc-pcsfn Feb 4 13:59:51.750: INFO: Got endpoints: latency-svc-pcsfn [1.086366306s] Feb 4 13:59:51.819: INFO: Created: latency-svc-n64d5 Feb 4 13:59:51.845: INFO: Got endpoints: latency-svc-n64d5 [1.141339285s] Feb 4 13:59:51.901: INFO: Created: latency-svc-9qk4h Feb 4 13:59:51.963: INFO: Got endpoints: latency-svc-9qk4h [1.186036906s] Feb 4 13:59:51.967: INFO: Created: latency-svc-dfq4g Feb 4 13:59:51.978: INFO: Got endpoints: latency-svc-dfq4g [1.148962996s] Feb 4 13:59:52.019: INFO: Created: latency-svc-2284q Feb 4 13:59:52.039: INFO: Got endpoints: latency-svc-2284q [895.859414ms] Feb 4 13:59:52.061: INFO: Created: latency-svc-x9btm Feb 4 13:59:52.118: INFO: Got endpoints: latency-svc-x9btm [962.60677ms] Feb 4 13:59:52.139: INFO: Created: latency-svc-2t694 Feb 4 13:59:52.174: INFO: Got endpoints: latency-svc-2t694 [980.781869ms] Feb 4 13:59:52.212: INFO: Created: latency-svc-kx7ls Feb 4 13:59:52.238: INFO: Got endpoints: latency-svc-kx7ls [1.011028996s] Feb 4 13:59:52.259: INFO: Created: latency-svc-hhxz6 Feb 4 13:59:52.287: INFO: Got endpoints: latency-svc-hhxz6 [979.30087ms] Feb 4 13:59:52.308: INFO: Created: latency-svc-cx62n Feb 4 
13:59:52.322: INFO: Got endpoints: latency-svc-cx62n [972.43637ms] Feb 4 13:59:52.376: INFO: Created: latency-svc-wk75n Feb 4 13:59:52.399: INFO: Got endpoints: latency-svc-wk75n [937.462656ms] Feb 4 13:59:52.401: INFO: Created: latency-svc-zxdb7 Feb 4 13:59:52.422: INFO: Got endpoints: latency-svc-zxdb7 [924.757277ms] Feb 4 13:59:52.458: INFO: Created: latency-svc-p6nz7 Feb 4 13:59:52.526: INFO: Got endpoints: latency-svc-p6nz7 [950.919291ms] Feb 4 13:59:52.559: INFO: Created: latency-svc-sfbxm Feb 4 13:59:52.571: INFO: Got endpoints: latency-svc-sfbxm [938.22289ms] Feb 4 13:59:52.595: INFO: Created: latency-svc-mrfpb Feb 4 13:59:52.614: INFO: Got endpoints: latency-svc-mrfpb [906.516401ms] Feb 4 13:59:52.682: INFO: Created: latency-svc-fqqwk Feb 4 13:59:52.715: INFO: Got endpoints: latency-svc-fqqwk [964.850744ms] Feb 4 13:59:52.716: INFO: Created: latency-svc-xfdkh Feb 4 13:59:52.745: INFO: Got endpoints: latency-svc-xfdkh [899.817065ms] Feb 4 13:59:52.843: INFO: Created: latency-svc-g8nvc Feb 4 13:59:52.878: INFO: Got endpoints: latency-svc-g8nvc [914.600888ms] Feb 4 13:59:52.878: INFO: Created: latency-svc-sfg77 Feb 4 13:59:52.907: INFO: Got endpoints: latency-svc-sfg77 [928.980047ms] Feb 4 13:59:52.990: INFO: Created: latency-svc-kwjvm Feb 4 13:59:53.015: INFO: Got endpoints: latency-svc-kwjvm [976.309857ms] Feb 4 13:59:53.016: INFO: Created: latency-svc-v6mlk Feb 4 13:59:53.050: INFO: Got endpoints: latency-svc-v6mlk [932.099628ms] Feb 4 13:59:53.081: INFO: Created: latency-svc-82j85 Feb 4 13:59:53.131: INFO: Got endpoints: latency-svc-82j85 [956.491482ms] Feb 4 13:59:53.147: INFO: Created: latency-svc-w4d8d Feb 4 13:59:53.160: INFO: Got endpoints: latency-svc-w4d8d [922.709212ms] Feb 4 13:59:53.202: INFO: Created: latency-svc-9msx4 Feb 4 13:59:53.268: INFO: Got endpoints: latency-svc-9msx4 [981.084276ms] Feb 4 13:59:53.270: INFO: Created: latency-svc-mxlgl Feb 4 13:59:53.303: INFO: Got endpoints: latency-svc-mxlgl [981.035427ms] Feb 4 13:59:53.327: INFO: Created: latency-svc-plxgb Feb 4 13:59:53.345: INFO: Got endpoints: latency-svc-plxgb [946.171613ms] Feb 4 13:59:53.442: INFO: Created: latency-svc-68mqq Feb 4 13:59:53.471: INFO: Created: latency-svc-pnt4q Feb 4 13:59:53.472: INFO: Got endpoints: latency-svc-68mqq [1.049951947s] Feb 4 13:59:53.476: INFO: Got endpoints: latency-svc-pnt4q [950.357094ms] Feb 4 13:59:53.501: INFO: Created: latency-svc-nmg49 Feb 4 13:59:53.513: INFO: Got endpoints: latency-svc-nmg49 [941.29383ms] Feb 4 13:59:53.586: INFO: Created: latency-svc-797kj Feb 4 13:59:53.633: INFO: Got endpoints: latency-svc-797kj [1.019420205s] Feb 4 13:59:53.634: INFO: Created: latency-svc-pnwsn Feb 4 13:59:53.675: INFO: Got endpoints: latency-svc-pnwsn [960.54124ms] Feb 4 13:59:53.736: INFO: Created: latency-svc-nm4ct Feb 4 13:59:53.754: INFO: Got endpoints: latency-svc-nm4ct [1.00922279s] Feb 4 13:59:53.772: INFO: Created: latency-svc-557nc Feb 4 13:59:53.802: INFO: Got endpoints: latency-svc-557nc [924.421654ms] Feb 4 13:59:53.803: INFO: Latencies: [124.51357ms 142.659981ms 144.171803ms 243.218002ms 280.997084ms 330.728128ms 448.844896ms 484.396754ms 563.381818ms 587.001187ms 674.017221ms 712.060862ms 823.668824ms 861.977056ms 895.859414ms 899.817065ms 906.516401ms 914.600888ms 922.641361ms 922.709212ms 924.421654ms 924.757277ms 928.980047ms 932.099628ms 937.462656ms 938.22289ms 941.29383ms 941.415388ms 942.099969ms 946.171613ms 949.025574ms 950.357094ms 950.919291ms 951.071688ms 956.491482ms 960.54124ms 962.60677ms 963.861895ms 964.850744ms 967.993493ms 969.215373ms 
972.43637ms 973.972121ms 976.309857ms 976.373442ms 976.860241ms 979.037215ms 979.241685ms 979.30087ms 980.781869ms 981.035427ms 981.084276ms 982.078302ms 982.198069ms 986.30222ms 994.024689ms 994.033731ms 996.456309ms 999.265594ms 1.000858878s 1.000908171s 1.002463723s 1.005211809s 1.005332058s 1.006209895s 1.006364739s 1.006616524s 1.008157661s 1.008688229s 1.009222549s 1.00922279s 1.009478802s 1.010981074s 1.011028996s 1.0165301s 1.018409236s 1.018996779s 1.019381643s 1.019420205s 1.019612272s 1.020492114s 1.0211479s 1.02130843s 1.022151811s 1.022599149s 1.022632501s 1.022949773s 1.023699563s 1.024305961s 1.02440731s 1.024795824s 1.024970053s 1.026443241s 1.027374328s 1.030189572s 1.032021694s 1.03472507s 1.035033168s 1.035555831s 1.037398279s 1.038626317s 1.039788982s 1.040744061s 1.042081984s 1.042195915s 1.042800861s 1.043346852s 1.045007769s 1.045968504s 1.048083134s 1.048479491s 1.048974703s 1.049380247s 1.049457547s 1.049951947s 1.050453234s 1.050476828s 1.050632725s 1.052607326s 1.054044076s 1.055006992s 1.056664821s 1.057993224s 1.058237475s 1.058239717s 1.059136889s 1.063463045s 1.064834839s 1.064996748s 1.068136171s 1.068802048s 1.070471276s 1.072017664s 1.072183243s 1.074070679s 1.074757215s 1.077115637s 1.078566108s 1.079866284s 1.080145206s 1.08337851s 1.08362601s 1.085994282s 1.086366306s 1.091549114s 1.091559147s 1.093054788s 1.098133213s 1.098873888s 1.099640228s 1.09967634s 1.100278899s 1.100317723s 1.102267607s 1.103755877s 1.103999207s 1.104414995s 1.104927724s 1.105385787s 1.106623257s 1.112889688s 1.13609178s 1.138360969s 1.139085682s 1.141339285s 1.147213024s 1.147645378s 1.148962996s 1.157907291s 1.160905929s 1.162182243s 1.164567376s 1.166788661s 1.186036906s 1.230546223s 1.260568114s 1.283205189s 1.346068678s 1.362843042s 1.401805761s 1.401859376s 1.405997881s 1.454189672s 1.567674348s 1.620920247s 1.638665051s 1.644539896s 1.821142982s 1.825176319s 1.834757901s 1.845020953s 1.845058136s 1.849441401s 2.052403438s 2.151190955s 2.229257305s 2.236583537s 2.239916749s 2.264354169s 2.265514839s] Feb 4 13:59:53.803: INFO: 50 %ile: 1.038626317s Feb 4 13:59:53.803: INFO: 90 %ile: 1.401859376s Feb 4 13:59:53.803: INFO: 99 %ile: 2.264354169s Feb 4 13:59:53.803: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:59:53.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-6197" for this suite. 
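Each "Created"/"Got endpoints" pair above times one Service: the test creates a Service selecting the svc-latency-rc pod and records how long until its endpoints appear, then reports percentiles over the 200 samples. A sketch of one such Service, with the selector label and port assumed:

    apiVersion: v1
    kind: Service
    metadata:
      name: latency-svc-example         # the test generates random suffixes
      namespace: svc-latency-6197
    spec:
      selector:
        name: svc-latency-rc            # assumed to match the RC's pod-template labels
      ports:
      - port: 80                        # assumed port
        targetPort: 80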
• [SLOW TEST:19.290 seconds] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":311,"completed":194,"skipped":3329,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:59:53.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: getting the auto-created API token Feb 4 13:59:54.465: INFO: created pod pod-service-account-defaultsa Feb 4 13:59:54.465: INFO: pod pod-service-account-defaultsa service account token volume mount: true Feb 4 13:59:54.484: INFO: created pod pod-service-account-mountsa Feb 4 13:59:54.484: INFO: pod pod-service-account-mountsa service account token volume mount: true Feb 4 13:59:54.521: INFO: created pod pod-service-account-nomountsa Feb 4 13:59:54.521: INFO: pod pod-service-account-nomountsa service account token volume mount: false Feb 4 13:59:54.544: INFO: created pod pod-service-account-defaultsa-mountspec Feb 4 13:59:54.544: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Feb 4 13:59:54.593: INFO: created pod pod-service-account-mountsa-mountspec Feb 4 13:59:54.593: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Feb 4 13:59:54.658: INFO: created pod pod-service-account-nomountsa-mountspec Feb 4 13:59:54.658: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Feb 4 13:59:54.666: INFO: created pod pod-service-account-defaultsa-nomountspec Feb 4 13:59:54.666: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Feb 4 13:59:54.695: INFO: created pod pod-service-account-mountsa-nomountspec Feb 4 13:59:54.695: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Feb 4 13:59:54.738: INFO: created pod pod-service-account-nomountsa-nomountspec Feb 4 13:59:54.738: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 13:59:54.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6642" for this suite. 
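The matrix above crosses the ServiceAccount's automount default with the pod-level override; when both are set, the pod spec wins. A sketch of the two knobs, with names and image assumed:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: nomount-sa                    # hypothetical
    automountServiceAccountToken: false   # service-account-level default
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-service-account-example   # hypothetical
    spec:
      serviceAccountName: nomount-sa
      automountServiceAccountToken: true  # pod-level setting overrides the SA default
      containers:
      - name: main
        image: busybox                    # assumed
        command: ["sleep", "3600"]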
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":311,"completed":195,"skipped":3341,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 13:59:55.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 4 13:59:55.299: INFO: Waiting up to 5m0s for pod "pod-a1b56728-b880-4f17-9b7b-d5b0e31695ee" in namespace "emptydir-8437" to be "Succeeded or Failed" Feb 4 13:59:55.346: INFO: Pod "pod-a1b56728-b880-4f17-9b7b-d5b0e31695ee": Phase="Pending", Reason="", readiness=false. Elapsed: 46.02286ms Feb 4 13:59:57.350: INFO: Pod "pod-a1b56728-b880-4f17-9b7b-d5b0e31695ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050257493s Feb 4 13:59:59.657: INFO: Pod "pod-a1b56728-b880-4f17-9b7b-d5b0e31695ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.357864851s Feb 4 14:00:02.238: INFO: Pod "pod-a1b56728-b880-4f17-9b7b-d5b0e31695ee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.938464452s Feb 4 14:00:04.724: INFO: Pod "pod-a1b56728-b880-4f17-9b7b-d5b0e31695ee": Phase="Pending", Reason="", readiness=false. Elapsed: 9.424455056s Feb 4 14:00:06.773: INFO: Pod "pod-a1b56728-b880-4f17-9b7b-d5b0e31695ee": Phase="Pending", Reason="", readiness=false. Elapsed: 11.473443975s Feb 4 14:00:08.807: INFO: Pod "pod-a1b56728-b880-4f17-9b7b-d5b0e31695ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.50705329s STEP: Saw pod success Feb 4 14:00:08.807: INFO: Pod "pod-a1b56728-b880-4f17-9b7b-d5b0e31695ee" satisfied condition "Succeeded or Failed" Feb 4 14:00:08.850: INFO: Trying to get logs from node latest-worker pod pod-a1b56728-b880-4f17-9b7b-d5b0e31695ee container test-container: STEP: delete the pod Feb 4 14:00:09.042: INFO: Waiting for pod pod-a1b56728-b880-4f17-9b7b-d5b0e31695ee to disappear Feb 4 14:00:09.094: INFO: Pod pod-a1b56728-b880-4f17-9b7b-d5b0e31695ee no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:00:09.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8437" for this suite. 
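The emptyDir case above verifies that a non-root user can create a 0666-mode file on the default medium. A minimal sketch, with uid, image, and command assumed:

    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-mode-example       # hypothetical
    spec:
      securityContext:
        runAsUser: 1000                 # non-root; uid assumed
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox                  # assumed
        command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c %a /test-volume/f"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir: {}                    # default medium (node disk)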
• [SLOW TEST:14.177 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":196,"skipped":3415,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:00:09.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name cm-test-opt-del-9eefb143-2754-4cdd-a0ec-5162ebe7fbe0 STEP: Creating configMap with name cm-test-opt-upd-b55e21ef-ab29-44bc-887b-c841aca9f2fd STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-9eefb143-2754-4cdd-a0ec-5162ebe7fbe0 STEP: Updating configmap cm-test-opt-upd-b55e21ef-ab29-44bc-887b-c841aca9f2fd STEP: Creating configMap with name cm-test-opt-create-3428d858-a657-41e4-9c1e-a6de6f24c8ba STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:00:23.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6097" for this suite. 
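This variant repeats the optional-update dance with a plain configMap volume rather than a projected one; the volume stanza is the only difference from the projected sketch earlier. The relevant stanza only, names assumed, slotting into a pod spec like the one above:

    volumes:
    - name: cm-volume
      configMap:
        name: cm-test-opt-del-example   # hypothetical; optional lets the pod start (and keep running) without it
        optional: true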
• [SLOW TEST:14.118 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":311,"completed":197,"skipped":3449,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:00:23.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:00:23.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-6918" for this suite. 
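Just below, the ReplicationController test creates a quota named "condition-test" that admits only two pods, then asks the RC for more so the controller surfaces a failure condition. Per the log, such a quota would look like:

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: condition-test
      namespace: replication-controller-4190
    spec:
      hard:
        pods: "2"                       # cap taken from the log ("allows only two pods")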
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":311,"completed":198,"skipped":3519,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:00:23.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 14:00:23.765: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Feb 4 14:00:24.949: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:00:25.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4190" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":311,"completed":199,"skipped":3555,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:00:25.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 14:00:32.428: INFO: Waiting up to 5m0s for pod "client-envvars-1c6ebf1b-c7f3-4231-8d35-3cc9005ae726" in namespace "pods-7494" to be "Succeeded or Failed" Feb 4 14:00:32.546: INFO: Pod "client-envvars-1c6ebf1b-c7f3-4231-8d35-3cc9005ae726": Phase="Pending", Reason="", readiness=false. 
Elapsed: 118.189012ms Feb 4 14:00:35.239: INFO: Pod "client-envvars-1c6ebf1b-c7f3-4231-8d35-3cc9005ae726": Phase="Pending", Reason="", readiness=false. Elapsed: 2.81059068s Feb 4 14:00:37.373: INFO: Pod "client-envvars-1c6ebf1b-c7f3-4231-8d35-3cc9005ae726": Phase="Pending", Reason="", readiness=false. Elapsed: 4.944494493s Feb 4 14:00:39.376: INFO: Pod "client-envvars-1c6ebf1b-c7f3-4231-8d35-3cc9005ae726": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.9476637s STEP: Saw pod success Feb 4 14:00:39.376: INFO: Pod "client-envvars-1c6ebf1b-c7f3-4231-8d35-3cc9005ae726" satisfied condition "Succeeded or Failed" Feb 4 14:00:39.419: INFO: Trying to get logs from node latest-worker pod client-envvars-1c6ebf1b-c7f3-4231-8d35-3cc9005ae726 container env3cont: STEP: delete the pod Feb 4 14:00:39.560: INFO: Waiting for pod client-envvars-1c6ebf1b-c7f3-4231-8d35-3cc9005ae726 to disappear Feb 4 14:00:39.587: INFO: Pod client-envvars-1c6ebf1b-c7f3-4231-8d35-3cc9005ae726 no longer exists [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:00:39.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7494" for this suite. • [SLOW TEST:14.574 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":311,"completed":200,"skipped":3578,"failed":0} SSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:00:39.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap configmap-2857/configmap-test-4caf55d4-9760-4810-a868-6fa8dbc17512 STEP: Creating a pod to test consume configMaps Feb 4 14:00:39.911: INFO: Waiting up to 5m0s for pod "pod-configmaps-ba224372-64a5-497f-9d21-f68e2385b92a" in namespace "configmap-2857" to be "Succeeded or Failed" Feb 4 14:00:39.954: INFO: Pod "pod-configmaps-ba224372-64a5-497f-9d21-f68e2385b92a": Phase="Pending", Reason="", readiness=false. Elapsed: 42.859248ms Feb 4 14:00:42.461: INFO: Pod "pod-configmaps-ba224372-64a5-497f-9d21-f68e2385b92a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.550050463s Feb 4 14:00:44.510: INFO: Pod "pod-configmaps-ba224372-64a5-497f-9d21-f68e2385b92a": Phase="Running", Reason="", readiness=true. Elapsed: 4.599308685s Feb 4 14:00:46.606: INFO: Pod "pod-configmaps-ba224372-64a5-497f-9d21-f68e2385b92a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.695402575s STEP: Saw pod success Feb 4 14:00:46.606: INFO: Pod "pod-configmaps-ba224372-64a5-497f-9d21-f68e2385b92a" satisfied condition "Succeeded or Failed" Feb 4 14:00:46.620: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-ba224372-64a5-497f-9d21-f68e2385b92a container env-test: STEP: delete the pod Feb 4 14:00:46.758: INFO: Waiting for pod pod-configmaps-ba224372-64a5-497f-9d21-f68e2385b92a to disappear Feb 4 14:00:46.769: INFO: Pod pod-configmaps-ba224372-64a5-497f-9d21-f68e2385b92a no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:00:46.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2857" for this suite. • [SLOW TEST:7.073 seconds] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":311,"completed":201,"skipped":3587,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:00:46.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Feb 4 14:00:46.957: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 4 14:00:47.042: INFO: Waiting for terminating namespaces to be deleted... 
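The ConfigMap env-var test above injects a key from configmap-test-4caf55d4-9760-4810-a868-6fa8dbc17512 into the container environment via configMapKeyRef. A sketch of the pod side, with variable name, key, image, and command assumed:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-configmaps-example      # hypothetical
    spec:
      restartPolicy: Never
      containers:
      - name: env-test
        image: busybox                  # assumed
        command: ["sh", "-c", "env"]
        env:
        - name: CONFIG_DATA_1           # assumed variable name
          valueFrom:
            configMapKeyRef:
              name: configmap-test-4caf55d4-9760-4810-a868-6fa8dbc17512
              key: data-1               # assumed key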
Feb 4 14:00:47.275: INFO: Logging pods the apiserver thinks are on node latest-worker before test Feb 4 14:00:47.451: INFO: chaos-controller-manager-69c479c674-tdrls from default started at 2021-01-26 11:59:56 +0000 UTC (1 container status recorded) Feb 4 14:00:47.451: INFO: Container chaos-mesh ready: true, restart count 0 Feb 4 14:00:47.451: INFO: chaos-daemon-vkxzr from default started at 2021-01-26 11:59:56 +0000 UTC (1 container status recorded) Feb 4 14:00:47.451: INFO: Container chaos-daemon ready: true, restart count 0 Feb 4 14:00:47.451: INFO: coredns-74ff55c5b-zzl9d from kube-system started at 2021-02-04 13:09:59 +0000 UTC (1 container status recorded) Feb 4 14:00:47.451: INFO: Container coredns ready: true, restart count 0 Feb 4 14:00:47.451: INFO: kindnet-5bf5g from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container status recorded) Feb 4 14:00:47.451: INFO: Container kindnet-cni ready: true, restart count 0 Feb 4 14:00:47.451: INFO: kube-proxy-f59c8 from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container status recorded) Feb 4 14:00:47.451: INFO: Container kube-proxy ready: true, restart count 0 Feb 4 14:00:47.451: INFO: server-envvars-122c3e7d-b158-4113-a4cf-74c5ae3ce0ca from pods-7494 started at 2021-02-04 14:00:25 +0000 UTC (1 container status recorded) Feb 4 14:00:47.452: INFO: Container srv ready: true, restart count 0 Feb 4 14:00:47.452: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test Feb 4 14:00:47.479: INFO: chaos-daemon-g67vf from default started at 2021-01-26 11:59:56 +0000 UTC (1 container status recorded) Feb 4 14:00:47.479: INFO: Container chaos-daemon ready: true, restart count 0 Feb 4 14:00:47.479: INFO: coredns-74ff55c5b-674bk from kube-system started at 2021-02-04 13:09:59 +0000 UTC (1 container status recorded) Feb 4 14:00:47.479: INFO: Container coredns ready: true, restart count 0 Feb 4 14:00:47.479: INFO: kindnet-98jtw from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container status recorded) Feb 4 14:00:47.479: INFO: Container kindnet-cni ready: true, restart count 0 Feb 4 14:00:47.479: INFO: kube-proxy-skm7x from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container status recorded) Feb 4 14:00:47.479: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
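The steps that follow pin three pods to the labeled node and give all of them hostPort 54321, varying only hostIP and protocol, which is why no conflict arises and all three schedule. A sketch of the third (UDP) pod, with name and image assumed:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod3-example                # hypothetical name
    spec:
      nodeSelector:
        kubernetes.io/e2e-75d1abf9-7ce7-450f-afdc-b8c215fd0af0: "90"  # the random label applied above
      containers:
      - name: server
        image: busybox                  # assumed image
        ports:
        - containerPort: 54321
          hostPort: 54321
          hostIP: 172.18.0.14           # same IP and port as pod2...
          protocol: UDP                 # ...but UDP instead of TCP, so they can coexist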
STEP: verifying the node has the label kubernetes.io/e2e-75d1abf9-7ce7-450f-afdc-b8c215fd0af0 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 172.18.0.14 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 172.18.0.14 but use UDP protocol on the node which pod2 resides STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Feb 4 14:01:12.359: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.14 http://127.0.0.1:54321/hostname] Namespace:sched-pred-3806 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 4 14:01:12.359: INFO: >>> kubeConfig: /root/.kube/config I0204 14:01:12.398868 7 log.go:181] (0xc002ef56b0) (0xc001bfafa0) Create stream I0204 14:01:12.398906 7 log.go:181] (0xc002ef56b0) (0xc001bfafa0) Stream added, broadcasting: 1 I0204 14:01:12.400939 7 log.go:181] (0xc002ef56b0) Reply frame received for 1 I0204 14:01:12.400982 7 log.go:181] (0xc002ef56b0) (0xc001e545a0) Create stream I0204 14:01:12.400997 7 log.go:181] (0xc002ef56b0) (0xc001e545a0) Stream added, broadcasting: 3 I0204 14:01:12.401961 7 log.go:181] (0xc002ef56b0) Reply frame received for 3 I0204 14:01:12.402001 7 log.go:181] (0xc002ef56b0) (0xc003d301e0) Create stream I0204 14:01:12.402016 7 log.go:181] (0xc002ef56b0) (0xc003d301e0) Stream added, broadcasting: 5 I0204 14:01:12.403153 7 log.go:181] (0xc002ef56b0) Reply frame received for 5 I0204 14:01:12.481350 7 log.go:181] (0xc002ef56b0) Data frame received for 5 I0204 14:01:12.481384 7 log.go:181] (0xc003d301e0) (5) Data frame handling I0204 14:01:12.481395 7 log.go:181] (0xc003d301e0) (5) Data frame sent I0204 14:01:12.481402 7 log.go:181] (0xc002ef56b0) Data frame received for 5 I0204 14:01:12.481408 7 log.go:181] (0xc003d301e0) (5) Data frame handling I0204 14:01:12.481461 7 log.go:181] (0xc003d301e0) (5) Data frame sent I0204 14:01:12.481476 7 log.go:181] (0xc002ef56b0) Data frame received for 5 I0204 14:01:12.481484 7 log.go:181] (0xc003d301e0) (5) Data frame handling I0204 14:01:12.481500 7 log.go:181] (0xc003d301e0) (5) Data frame sent I0204 14:01:12.481511 7 log.go:181] (0xc002ef56b0) Data frame received for 5 I0204 14:01:12.481518 7 log.go:181] (0xc003d301e0) (5) Data frame handling I0204 14:01:12.481533 7 log.go:181] (0xc003d301e0) (5) Data frame sent I0204 14:01:12.481539 7 log.go:181] (0xc002ef56b0) Data frame received for 5 I0204 14:01:12.481546 7 log.go:181] (0xc003d301e0) (5) Data frame handling I0204 14:01:12.481567 7 log.go:181] (0xc003d301e0) (5) Data frame sent I0204 14:01:12.481577 7 log.go:181] (0xc002ef56b0) Data frame received for 5 I0204 14:01:12.481583 7 log.go:181] (0xc003d301e0) (5) Data frame handling I0204 14:01:12.481598 7 log.go:181] (0xc003d301e0) (5) Data frame sent I0204 14:01:12.481989 7 log.go:181] (0xc002ef56b0) Data frame received for 5 I0204 14:01:12.482040 7 log.go:181] (0xc003d301e0) (5) Data frame handling I0204 14:01:12.482067 7 log.go:181] (0xc003d301e0) (5) Data frame sent I0204 14:01:12.482090 7 log.go:181] (0xc002ef56b0) Data frame received for 5 I0204 14:01:12.482113 7 log.go:181] (0xc002ef56b0) Data frame received for 3 I0204 14:01:12.482138 7 log.go:181] (0xc001e545a0) (3) Data frame handling I0204 14:01:12.482154 7 log.go:181] (0xc001e545a0) (3) Data frame sent I0204 
14:01:12.482179 7 log.go:181] (0xc003d301e0) (5) Data frame handling I0204 14:01:12.482211 7 log.go:181] (0xc003d301e0) (5) Data frame sent I0204 14:01:12.482740 7 log.go:181] (0xc002ef56b0) Data frame received for 5 I0204 14:01:12.482752 7 log.go:181] (0xc003d301e0) (5) Data frame handling I0204 14:01:12.482802 7 log.go:181] (0xc002ef56b0) Data frame received for 3 I0204 14:01:12.482830 7 log.go:181] (0xc001e545a0) (3) Data frame handling I0204 14:01:12.485177 7 log.go:181] (0xc002ef56b0) Data frame received for 1 I0204 14:01:12.485214 7 log.go:181] (0xc001bfafa0) (1) Data frame handling I0204 14:01:12.485240 7 log.go:181] (0xc001bfafa0) (1) Data frame sent I0204 14:01:12.485272 7 log.go:181] (0xc002ef56b0) (0xc001bfafa0) Stream removed, broadcasting: 1 I0204 14:01:12.485335 7 log.go:181] (0xc002ef56b0) Go away received I0204 14:01:12.485385 7 log.go:181] (0xc002ef56b0) (0xc001bfafa0) Stream removed, broadcasting: 1 I0204 14:01:12.485438 7 log.go:181] (0xc002ef56b0) (0xc001e545a0) Stream removed, broadcasting: 3 I0204 14:01:12.485467 7 log.go:181] (0xc002ef56b0) (0xc003d301e0) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.14, port: 54321 Feb 4 14:01:12.485: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.14:54321/hostname] Namespace:sched-pred-3806 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 4 14:01:12.485: INFO: >>> kubeConfig: /root/.kube/config I0204 14:01:12.514120 7 log.go:181] (0xc000a42b00) (0xc00114c0a0) Create stream I0204 14:01:12.514163 7 log.go:181] (0xc000a42b00) (0xc00114c0a0) Stream added, broadcasting: 1 I0204 14:01:12.516077 7 log.go:181] (0xc000a42b00) Reply frame received for 1 I0204 14:01:12.516119 7 log.go:181] (0xc000a42b00) (0xc003d30280) Create stream I0204 14:01:12.516151 7 log.go:181] (0xc000a42b00) (0xc003d30280) Stream added, broadcasting: 3 I0204 14:01:12.517218 7 log.go:181] (0xc000a42b00) Reply frame received for 3 I0204 14:01:12.517250 7 log.go:181] (0xc000a42b00) (0xc003e919a0) Create stream I0204 14:01:12.517261 7 log.go:181] (0xc000a42b00) (0xc003e919a0) Stream added, broadcasting: 5 I0204 14:01:12.518161 7 log.go:181] (0xc000a42b00) Reply frame received for 5 I0204 14:01:12.575756 7 log.go:181] (0xc000a42b00) Data frame received for 5 I0204 14:01:12.575786 7 log.go:181] (0xc003e919a0) (5) Data frame handling I0204 14:01:12.575805 7 log.go:181] (0xc003e919a0) (5) Data frame sent I0204 14:01:12.575835 7 log.go:181] (0xc000a42b00) Data frame received for 5 I0204 14:01:12.575846 7 log.go:181] (0xc003e919a0) (5) Data frame handling I0204 14:01:12.575858 7 log.go:181] (0xc003e919a0) (5) Data frame sent I0204 14:01:12.575866 7 log.go:181] (0xc000a42b00) Data frame received for 5 I0204 14:01:12.575877 7 log.go:181] (0xc003e919a0) (5) Data frame handling I0204 14:01:12.575903 7 log.go:181] (0xc003e919a0) (5) Data frame sent I0204 14:01:12.576460 7 log.go:181] (0xc000a42b00) Data frame received for 3 I0204 14:01:12.576476 7 log.go:181] (0xc003d30280) (3) Data frame handling I0204 14:01:12.576485 7 log.go:181] (0xc003d30280) (3) Data frame sent I0204 14:01:12.576496 7 log.go:181] (0xc000a42b00) Data frame received for 5 I0204 14:01:12.576515 7 log.go:181] (0xc003e919a0) (5) Data frame handling I0204 14:01:12.576547 7 log.go:181] (0xc003e919a0) (5) Data frame sent I0204 14:01:12.576829 7 log.go:181] (0xc000a42b00) Data frame received for 3 I0204 14:01:12.576934 7 log.go:181] 
(0xc003d30280) (3) Data frame handling I0204 14:01:12.577044 7 log.go:181] (0xc000a42b00) Data frame received for 5 I0204 14:01:12.577060 7 log.go:181] (0xc003e919a0) (5) Data frame handling I0204 14:01:12.578856 7 log.go:181] (0xc000a42b00) Data frame received for 1 I0204 14:01:12.578875 7 log.go:181] (0xc00114c0a0) (1) Data frame handling I0204 14:01:12.578901 7 log.go:181] (0xc00114c0a0) (1) Data frame sent I0204 14:01:12.579000 7 log.go:181] (0xc000a42b00) (0xc00114c0a0) Stream removed, broadcasting: 1 I0204 14:01:12.579022 7 log.go:181] (0xc000a42b00) Go away received I0204 14:01:12.579086 7 log.go:181] (0xc000a42b00) (0xc00114c0a0) Stream removed, broadcasting: 1 I0204 14:01:12.579108 7 log.go:181] (0xc000a42b00) (0xc003d30280) Stream removed, broadcasting: 3 I0204 14:01:12.579124 7 log.go:181] (0xc000a42b00) (0xc003e919a0) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.14, port: 54321 UDP Feb 4 14:01:12.579: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.14 54321] Namespace:sched-pred-3806 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 4 14:01:12.579: INFO: >>> kubeConfig: /root/.kube/config I0204 14:01:12.602615 7 log.go:181] (0xc0023efad0) (0xc003e91ea0) Create stream I0204 14:01:12.602647 7 log.go:181] (0xc0023efad0) (0xc003e91ea0) Stream added, broadcasting: 1 I0204 14:01:12.604068 7 log.go:181] (0xc0023efad0) Reply frame received for 1 I0204 14:01:12.604101 7 log.go:181] (0xc0023efad0) (0xc001bfb0e0) Create stream I0204 14:01:12.604112 7 log.go:181] (0xc0023efad0) (0xc001bfb0e0) Stream added, broadcasting: 3 I0204 14:01:12.604945 7 log.go:181] (0xc0023efad0) Reply frame received for 3 I0204 14:01:12.604989 7 log.go:181] (0xc0023efad0) (0xc00114c140) Create stream I0204 14:01:12.605005 7 log.go:181] (0xc0023efad0) (0xc00114c140) Stream added, broadcasting: 5 I0204 14:01:12.605768 7 log.go:181] (0xc0023efad0) Reply frame received for 5 I0204 14:01:17.666006 7 log.go:181] (0xc0023efad0) Data frame received for 5 I0204 14:01:17.666053 7 log.go:181] (0xc00114c140) (5) Data frame handling I0204 14:01:17.666070 7 log.go:181] (0xc00114c140) (5) Data frame sent I0204 14:01:17.666087 7 log.go:181] (0xc0023efad0) Data frame received for 5 I0204 14:01:17.666122 7 log.go:181] (0xc00114c140) (5) Data frame handling I0204 14:01:17.666187 7 log.go:181] (0xc0023efad0) Data frame received for 3 I0204 14:01:17.666225 7 log.go:181] (0xc001bfb0e0) (3) Data frame handling I0204 14:01:17.668361 7 log.go:181] (0xc0023efad0) Data frame received for 1 I0204 14:01:17.668387 7 log.go:181] (0xc003e91ea0) (1) Data frame handling I0204 14:01:17.668401 7 log.go:181] (0xc003e91ea0) (1) Data frame sent I0204 14:01:17.668415 7 log.go:181] (0xc0023efad0) (0xc003e91ea0) Stream removed, broadcasting: 1 I0204 14:01:17.668481 7 log.go:181] (0xc0023efad0) Go away received I0204 14:01:17.668593 7 log.go:181] (0xc0023efad0) (0xc003e91ea0) Stream removed, broadcasting: 1 I0204 14:01:17.668623 7 log.go:181] (0xc0023efad0) (0xc001bfb0e0) Stream removed, broadcasting: 3 I0204 14:01:17.668650 7 log.go:181] (0xc0023efad0) (0xc00114c140) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Feb 4 14:01:17.668: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.14 http://127.0.0.1:54321/hostname] Namespace:sched-pred-3806 PodName:e2e-host-exec 
ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 4 14:01:17.668: INFO: >>> kubeConfig: /root/.kube/config I0204 14:01:17.702610 7 log.go:181] (0xc000a431e0) (0xc00114c3c0) Create stream I0204 14:01:17.702645 7 log.go:181] (0xc000a431e0) (0xc00114c3c0) Stream added, broadcasting: 1 I0204 14:01:17.707849 7 log.go:181] (0xc000a431e0) Reply frame received for 1 I0204 14:01:17.707894 7 log.go:181] (0xc000a431e0) (0xc00214a820) Create stream I0204 14:01:17.707907 7 log.go:181] (0xc000a431e0) (0xc00214a820) Stream added, broadcasting: 3 I0204 14:01:17.709382 7 log.go:181] (0xc000a431e0) Reply frame received for 3 I0204 14:01:17.709415 7 log.go:181] (0xc000a431e0) (0xc00214a8c0) Create stream I0204 14:01:17.709429 7 log.go:181] (0xc000a431e0) (0xc00214a8c0) Stream added, broadcasting: 5 I0204 14:01:17.710840 7 log.go:181] (0xc000a431e0) Reply frame received for 5 I0204 14:01:17.767308 7 log.go:181] (0xc000a431e0) Data frame received for 5 I0204 14:01:17.767425 7 log.go:181] (0xc00214a8c0) (5) Data frame handling I0204 14:01:17.767446 7 log.go:181] (0xc00214a8c0) (5) Data frame sent I0204 14:01:17.767476 7 log.go:181] (0xc000a431e0) Data frame received for 3 I0204 14:01:17.767531 7 log.go:181] (0xc00214a820) (3) Data frame handling I0204 14:01:17.767564 7 log.go:181] (0xc00214a820) (3) Data frame sent I0204 14:01:17.767590 7 log.go:181] (0xc000a431e0) Data frame received for 5 I0204 14:01:17.767608 7 log.go:181] (0xc00214a8c0) (5) Data frame handling I0204 14:01:17.767631 7 log.go:181] (0xc00214a8c0) (5) Data frame sent I0204 14:01:17.767649 7 log.go:181] (0xc000a431e0) Data frame received for 5 I0204 14:01:17.767663 7 log.go:181] (0xc00214a8c0) (5) Data frame handling I0204 14:01:17.767682 7 log.go:181] (0xc000a431e0) Data frame received for 3 I0204 14:01:17.767696 7 log.go:181] (0xc00214a820) (3) Data frame handling I0204 14:01:17.767764 7 log.go:181] (0xc00214a8c0) (5) Data frame sent I0204 14:01:17.767872 7 log.go:181] (0xc000a431e0) Data frame received for 5 I0204 14:01:17.767911 7 log.go:181] (0xc00214a8c0) (5) Data frame handling I0204 14:01:17.769279 7 log.go:181] (0xc000a431e0) Data frame received for 1 I0204 14:01:17.769292 7 log.go:181] (0xc00114c3c0) (1) Data frame handling I0204 14:01:17.769300 7 log.go:181] (0xc00114c3c0) (1) Data frame sent I0204 14:01:17.769545 7 log.go:181] (0xc000a431e0) (0xc00114c3c0) Stream removed, broadcasting: 1 I0204 14:01:17.769612 7 log.go:181] (0xc000a431e0) (0xc00114c3c0) Stream removed, broadcasting: 1 I0204 14:01:17.769620 7 log.go:181] (0xc000a431e0) (0xc00214a820) Stream removed, broadcasting: 3 I0204 14:01:17.769734 7 log.go:181] (0xc000a431e0) Go away received I0204 14:01:17.769776 7 log.go:181] (0xc000a431e0) (0xc00214a8c0) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.14, port: 54321 Feb 4 14:01:17.769: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.14:54321/hostname] Namespace:sched-pred-3806 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 4 14:01:17.769: INFO: >>> kubeConfig: /root/.kube/config I0204 14:01:17.799722 7 log.go:181] (0xc000a438c0) (0xc00114c780) Create stream I0204 14:01:17.799759 7 log.go:181] (0xc000a438c0) (0xc00114c780) Stream added, broadcasting: 1 I0204 14:01:17.801216 7 log.go:181] (0xc000a438c0) Reply frame received for 1 I0204 14:01:17.801246 7 log.go:181] (0xc000a438c0) 
(0xc00114c820) Create stream I0204 14:01:17.801265 7 log.go:181] (0xc000a438c0) (0xc00114c820) Stream added, broadcasting: 3 I0204 14:01:17.802124 7 log.go:181] (0xc000a438c0) Reply frame received for 3 I0204 14:01:17.802183 7 log.go:181] (0xc000a438c0) (0xc003d30320) Create stream I0204 14:01:17.802202 7 log.go:181] (0xc000a438c0) (0xc003d30320) Stream added, broadcasting: 5 I0204 14:01:17.803106 7 log.go:181] (0xc000a438c0) Reply frame received for 5 I0204 14:01:17.860642 7 log.go:181] (0xc000a438c0) Data frame received for 5 I0204 14:01:17.860695 7 log.go:181] (0xc003d30320) (5) Data frame handling I0204 14:01:17.860718 7 log.go:181] (0xc003d30320) (5) Data frame sent I0204 14:01:17.860729 7 log.go:181] (0xc000a438c0) Data frame received for 5 I0204 14:01:17.860740 7 log.go:181] (0xc003d30320) (5) Data frame handling I0204 14:01:17.860756 7 log.go:181] (0xc003d30320) (5) Data frame sent I0204 14:01:17.860766 7 log.go:181] (0xc000a438c0) Data frame received for 5 I0204 14:01:17.860777 7 log.go:181] (0xc003d30320) (5) Data frame handling I0204 14:01:17.860790 7 log.go:181] (0xc003d30320) (5) Data frame sent I0204 14:01:17.860800 7 log.go:181] (0xc000a438c0) Data frame received for 5 I0204 14:01:17.860810 7 log.go:181] (0xc003d30320) (5) Data frame handling I0204 14:01:17.860824 7 log.go:181] (0xc003d30320) (5) Data frame sent I0204 14:01:17.861517 7 log.go:181] (0xc000a438c0) Data frame received for 3 I0204 14:01:17.861546 7 log.go:181] (0xc00114c820) (3) Data frame handling I0204 14:01:17.861558 7 log.go:181] (0xc00114c820) (3) Data frame sent I0204 14:01:17.861576 7 log.go:181] (0xc000a438c0) Data frame received for 5 I0204 14:01:17.861587 7 log.go:181] (0xc003d30320) (5) Data frame handling I0204 14:01:17.861598 7 log.go:181] (0xc003d30320) (5) Data frame sent I0204 14:01:17.861609 7 log.go:181] (0xc000a438c0) Data frame received for 5 I0204 14:01:17.861620 7 log.go:181] (0xc003d30320) (5) Data frame handling I0204 14:01:17.861634 7 log.go:181] (0xc003d30320) (5) Data frame sent I0204 14:01:17.862285 7 log.go:181] (0xc000a438c0) Data frame received for 3 I0204 14:01:17.862297 7 log.go:181] (0xc00114c820) (3) Data frame handling I0204 14:01:17.862325 7 log.go:181] (0xc000a438c0) Data frame received for 5 I0204 14:01:17.862367 7 log.go:181] (0xc003d30320) (5) Data frame handling I0204 14:01:17.863694 7 log.go:181] (0xc000a438c0) Data frame received for 1 I0204 14:01:17.863715 7 log.go:181] (0xc00114c780) (1) Data frame handling I0204 14:01:17.863736 7 log.go:181] (0xc00114c780) (1) Data frame sent I0204 14:01:17.863749 7 log.go:181] (0xc000a438c0) (0xc00114c780) Stream removed, broadcasting: 1 I0204 14:01:17.863766 7 log.go:181] (0xc000a438c0) Go away received I0204 14:01:17.863862 7 log.go:181] (0xc000a438c0) (0xc00114c780) Stream removed, broadcasting: 1 I0204 14:01:17.863897 7 log.go:181] (0xc000a438c0) (0xc00114c820) Stream removed, broadcasting: 3 I0204 14:01:17.863924 7 log.go:181] (0xc000a438c0) (0xc003d30320) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.14, port: 54321 UDP Feb 4 14:01:17.863: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.14 54321] Namespace:sched-pred-3806 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 4 14:01:17.863: INFO: >>> kubeConfig: /root/.kube/config I0204 14:01:17.895063 7 log.go:181] (0xc003b0a370) (0xc001bfb360) Create stream I0204 14:01:17.895104 7 log.go:181] (0xc003b0a370) 
(0xc001bfb360) Stream added, broadcasting: 1 I0204 14:01:17.897415 7 log.go:181] (0xc003b0a370) Reply frame received for 1 I0204 14:01:17.897453 7 log.go:181] (0xc003b0a370) (0xc003e91f40) Create stream I0204 14:01:17.897466 7 log.go:181] (0xc003b0a370) (0xc003e91f40) Stream added, broadcasting: 3 I0204 14:01:17.898504 7 log.go:181] (0xc003b0a370) Reply frame received for 3 I0204 14:01:17.898553 7 log.go:181] (0xc003b0a370) (0xc001bfb400) Create stream I0204 14:01:17.898581 7 log.go:181] (0xc003b0a370) (0xc001bfb400) Stream added, broadcasting: 5 I0204 14:01:17.899488 7 log.go:181] (0xc003b0a370) Reply frame received for 5 I0204 14:01:22.972966 7 log.go:181] (0xc003b0a370) Data frame received for 5 I0204 14:01:22.973035 7 log.go:181] (0xc001bfb400) (5) Data frame handling I0204 14:01:22.973056 7 log.go:181] (0xc001bfb400) (5) Data frame sent I0204 14:01:22.973079 7 log.go:181] (0xc003b0a370) Data frame received for 5 I0204 14:01:22.973120 7 log.go:181] (0xc001bfb400) (5) Data frame handling I0204 14:01:22.973224 7 log.go:181] (0xc003b0a370) Data frame received for 3 I0204 14:01:22.973255 7 log.go:181] (0xc003e91f40) (3) Data frame handling I0204 14:01:22.975511 7 log.go:181] (0xc003b0a370) Data frame received for 1 I0204 14:01:22.975551 7 log.go:181] (0xc001bfb360) (1) Data frame handling I0204 14:01:22.975594 7 log.go:181] (0xc001bfb360) (1) Data frame sent I0204 14:01:22.975732 7 log.go:181] (0xc003b0a370) (0xc001bfb360) Stream removed, broadcasting: 1 I0204 14:01:22.975759 7 log.go:181] (0xc003b0a370) Go away received I0204 14:01:22.975877 7 log.go:181] (0xc003b0a370) (0xc001bfb360) Stream removed, broadcasting: 1 I0204 14:01:22.975914 7 log.go:181] (0xc003b0a370) (0xc003e91f40) Stream removed, broadcasting: 3 I0204 14:01:22.975958 7 log.go:181] (0xc003b0a370) (0xc001bfb400) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Feb 4 14:01:22.976: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.14 http://127.0.0.1:54321/hostname] Namespace:sched-pred-3806 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 4 14:01:22.976: INFO: >>> kubeConfig: /root/.kube/config I0204 14:01:23.006316 7 log.go:181] (0xc000768dc0) (0xc003d30640) Create stream I0204 14:01:23.006358 7 log.go:181] (0xc000768dc0) (0xc003d30640) Stream added, broadcasting: 1 I0204 14:01:23.008022 7 log.go:181] (0xc000768dc0) Reply frame received for 1 I0204 14:01:23.008053 7 log.go:181] (0xc000768dc0) (0xc003928000) Create stream I0204 14:01:23.008062 7 log.go:181] (0xc000768dc0) (0xc003928000) Stream added, broadcasting: 3 I0204 14:01:23.008802 7 log.go:181] (0xc000768dc0) Reply frame received for 3 I0204 14:01:23.008819 7 log.go:181] (0xc000768dc0) (0xc003d306e0) Create stream I0204 14:01:23.008828 7 log.go:181] (0xc000768dc0) (0xc003d306e0) Stream added, broadcasting: 5 I0204 14:01:23.009948 7 log.go:181] (0xc000768dc0) Reply frame received for 5 I0204 14:01:23.099512 7 log.go:181] (0xc000768dc0) Data frame received for 5 I0204 14:01:23.099545 7 log.go:181] (0xc003d306e0) (5) Data frame handling I0204 14:01:23.099554 7 log.go:181] (0xc003d306e0) (5) Data frame sent I0204 14:01:23.099565 7 log.go:181] (0xc000768dc0) Data frame received for 3 I0204 14:01:23.099570 7 log.go:181] (0xc003928000) (3) Data frame handling I0204 14:01:23.099575 7 log.go:181] (0xc003928000) (3) Data frame sent I0204 14:01:23.099994 7 log.go:181] (0xc000768dc0) 
Data frame received for 5 I0204 14:01:23.100010 7 log.go:181] (0xc003d306e0) (5) Data frame handling I0204 14:01:23.100123 7 log.go:181] (0xc000768dc0) Data frame received for 3 I0204 14:01:23.100160 7 log.go:181] (0xc003928000) (3) Data frame handling I0204 14:01:23.102047 7 log.go:181] (0xc000768dc0) Data frame received for 1 I0204 14:01:23.102093 7 log.go:181] (0xc003d30640) (1) Data frame handling I0204 14:01:23.102114 7 log.go:181] (0xc003d30640) (1) Data frame sent I0204 14:01:23.102133 7 log.go:181] (0xc000768dc0) (0xc003d30640) Stream removed, broadcasting: 1 I0204 14:01:23.102226 7 log.go:181] (0xc000768dc0) (0xc003d30640) Stream removed, broadcasting: 1 I0204 14:01:23.102246 7 log.go:181] (0xc000768dc0) (0xc003928000) Stream removed, broadcasting: 3 I0204 14:01:23.102263 7 log.go:181] (0xc000768dc0) (0xc003d306e0) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.14, port: 54321 I0204 14:01:23.102314 7 log.go:181] (0xc000768dc0) Go away received Feb 4 14:01:23.102: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.14:54321/hostname] Namespace:sched-pred-3806 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 4 14:01:23.102: INFO: >>> kubeConfig: /root/.kube/config I0204 14:01:23.135456 7 log.go:181] (0xc003c06000) (0xc00114caa0) Create stream I0204 14:01:23.135482 7 log.go:181] (0xc003c06000) (0xc00114caa0) Stream added, broadcasting: 1 I0204 14:01:23.137475 7 log.go:181] (0xc003c06000) Reply frame received for 1 I0204 14:01:23.137513 7 log.go:181] (0xc003c06000) (0xc00214a960) Create stream I0204 14:01:23.137528 7 log.go:181] (0xc003c06000) (0xc00214a960) Stream added, broadcasting: 3 I0204 14:01:23.138424 7 log.go:181] (0xc003c06000) Reply frame received for 3 I0204 14:01:23.138459 7 log.go:181] (0xc003c06000) (0xc0039280a0) Create stream I0204 14:01:23.138468 7 log.go:181] (0xc003c06000) (0xc0039280a0) Stream added, broadcasting: 5 I0204 14:01:23.139381 7 log.go:181] (0xc003c06000) Reply frame received for 5 I0204 14:01:23.199976 7 log.go:181] (0xc003c06000) Data frame received for 5 I0204 14:01:23.200022 7 log.go:181] (0xc0039280a0) (5) Data frame handling I0204 14:01:23.200080 7 log.go:181] (0xc0039280a0) (5) Data frame sent I0204 14:01:23.200098 7 log.go:181] (0xc003c06000) Data frame received for 5 I0204 14:01:23.200108 7 log.go:181] (0xc0039280a0) (5) Data frame handling I0204 14:01:23.200162 7 log.go:181] (0xc0039280a0) (5) Data frame sent I0204 14:01:23.200172 7 log.go:181] (0xc003c06000) Data frame received for 5 I0204 14:01:23.200181 7 log.go:181] (0xc0039280a0) (5) Data frame handling I0204 14:01:23.200195 7 log.go:181] (0xc0039280a0) (5) Data frame sent I0204 14:01:23.200205 7 log.go:181] (0xc003c06000) Data frame received for 5 I0204 14:01:23.200214 7 log.go:181] (0xc0039280a0) (5) Data frame handling I0204 14:01:23.200225 7 log.go:181] (0xc0039280a0) (5) Data frame sent I0204 14:01:23.200234 7 log.go:181] (0xc003c06000) Data frame received for 5 I0204 14:01:23.200243 7 log.go:181] (0xc0039280a0) (5) Data frame handling I0204 14:01:23.200252 7 log.go:181] (0xc0039280a0) (5) Data frame sent I0204 14:01:23.200388 7 log.go:181] (0xc003c06000) Data frame received for 5 I0204 14:01:23.200409 7 log.go:181] (0xc0039280a0) (5) Data frame handling I0204 14:01:23.200427 7 log.go:181] (0xc0039280a0) (5) Data frame sent I0204 14:01:23.200441 7 log.go:181] (0xc003c06000) Data frame received for 5 I0204 
14:01:23.200446 7 log.go:181] (0xc0039280a0) (5) Data frame handling I0204 14:01:23.200459 7 log.go:181] (0xc003c06000) Data frame received for 3 I0204 14:01:23.200481 7 log.go:181] (0xc0039280a0) (5) Data frame sent I0204 14:01:23.200518 7 log.go:181] (0xc00214a960) (3) Data frame handling I0204 14:01:23.200543 7 log.go:181] (0xc00214a960) (3) Data frame sent I0204 14:01:23.201384 7 log.go:181] (0xc003c06000) Data frame received for 3 I0204 14:01:23.201436 7 log.go:181] (0xc00214a960) (3) Data frame handling I0204 14:01:23.201467 7 log.go:181] (0xc003c06000) Data frame received for 5 I0204 14:01:23.201500 7 log.go:181] (0xc0039280a0) (5) Data frame handling I0204 14:01:23.202882 7 log.go:181] (0xc003c06000) Data frame received for 1 I0204 14:01:23.202910 7 log.go:181] (0xc00114caa0) (1) Data frame handling I0204 14:01:23.202933 7 log.go:181] (0xc00114caa0) (1) Data frame sent I0204 14:01:23.202971 7 log.go:181] (0xc003c06000) (0xc00114caa0) Stream removed, broadcasting: 1 I0204 14:01:23.203064 7 log.go:181] (0xc003c06000) Go away received I0204 14:01:23.203103 7 log.go:181] (0xc003c06000) (0xc00114caa0) Stream removed, broadcasting: 1 I0204 14:01:23.203130 7 log.go:181] (0xc003c06000) (0xc00214a960) Stream removed, broadcasting: 3 I0204 14:01:23.203141 7 log.go:181] (0xc003c06000) (0xc0039280a0) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.14, port: 54321 UDP Feb 4 14:01:23.203: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.14 54321] Namespace:sched-pred-3806 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 4 14:01:23.203: INFO: >>> kubeConfig: /root/.kube/config I0204 14:01:23.233999 7 log.go:181] (0xc00094d3f0) (0xc00214ac80) Create stream I0204 14:01:23.234040 7 log.go:181] (0xc00094d3f0) (0xc00214ac80) Stream added, broadcasting: 1 I0204 14:01:23.236072 7 log.go:181] (0xc00094d3f0) Reply frame received for 1 I0204 14:01:23.236123 7 log.go:181] (0xc00094d3f0) (0xc003928140) Create stream I0204 14:01:23.236142 7 log.go:181] (0xc00094d3f0) (0xc003928140) Stream added, broadcasting: 3 I0204 14:01:23.237454 7 log.go:181] (0xc00094d3f0) Reply frame received for 3 I0204 14:01:23.237502 7 log.go:181] (0xc00094d3f0) (0xc003d30780) Create stream I0204 14:01:23.237523 7 log.go:181] (0xc00094d3f0) (0xc003d30780) Stream added, broadcasting: 5 I0204 14:01:23.238459 7 log.go:181] (0xc00094d3f0) Reply frame received for 5 I0204 14:01:28.297919 7 log.go:181] (0xc00094d3f0) Data frame received for 5 I0204 14:01:28.297974 7 log.go:181] (0xc003d30780) (5) Data frame handling I0204 14:01:28.297999 7 log.go:181] (0xc003d30780) (5) Data frame sent I0204 14:01:28.298097 7 log.go:181] (0xc00094d3f0) Data frame received for 3 I0204 14:01:28.298151 7 log.go:181] (0xc003928140) (3) Data frame handling I0204 14:01:28.298240 7 log.go:181] (0xc00094d3f0) Data frame received for 5 I0204 14:01:28.298272 7 log.go:181] (0xc003d30780) (5) Data frame handling I0204 14:01:28.300159 7 log.go:181] (0xc00094d3f0) Data frame received for 1 I0204 14:01:28.300206 7 log.go:181] (0xc00214ac80) (1) Data frame handling I0204 14:01:28.300222 7 log.go:181] (0xc00214ac80) (1) Data frame sent I0204 14:01:28.300232 7 log.go:181] (0xc00094d3f0) (0xc00214ac80) Stream removed, broadcasting: 1 I0204 14:01:28.300245 7 log.go:181] (0xc00094d3f0) Go away received I0204 14:01:28.300428 7 log.go:181] (0xc00094d3f0) (0xc00214ac80) Stream removed, broadcasting: 1 I0204 
14:01:28.300464 7 log.go:181] (0xc00094d3f0) (0xc003928140) Stream removed, broadcasting: 3 I0204 14:01:28.300490 7 log.go:181] (0xc00094d3f0) (0xc003d30780) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Feb 4 14:01:28.300: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.14 http://127.0.0.1:54321/hostname] Namespace:sched-pred-3806 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 4 14:01:28.300: INFO: >>> kubeConfig: /root/.kube/config I0204 14:01:28.341917 7 log.go:181] (0xc003d20210) (0xc0039283c0) Create stream I0204 14:01:28.341951 7 log.go:181] (0xc003d20210) (0xc0039283c0) Stream added, broadcasting: 1 I0204 14:01:28.343806 7 log.go:181] (0xc003d20210) Reply frame received for 1 I0204 14:01:28.343825 7 log.go:181] (0xc003d20210) (0xc001bfb4a0) Create stream I0204 14:01:28.343832 7 log.go:181] (0xc003d20210) (0xc001bfb4a0) Stream added, broadcasting: 3 I0204 14:01:28.344569 7 log.go:181] (0xc003d20210) Reply frame received for 3 I0204 14:01:28.344593 7 log.go:181] (0xc003d20210) (0xc00114cb40) Create stream I0204 14:01:28.344600 7 log.go:181] (0xc003d20210) (0xc00114cb40) Stream added, broadcasting: 5 I0204 14:01:28.345850 7 log.go:181] (0xc003d20210) Reply frame received for 5 I0204 14:01:28.411644 7 log.go:181] (0xc003d20210) Data frame received for 5 I0204 14:01:28.411681 7 log.go:181] (0xc00114cb40) (5) Data frame handling I0204 14:01:28.411703 7 log.go:181] (0xc00114cb40) (5) Data frame sent I0204 14:01:28.411718 7 log.go:181] (0xc003d20210) Data frame received for 5 I0204 14:01:28.411734 7 log.go:181] (0xc00114cb40) (5) Data frame handling I0204 14:01:28.411792 7 log.go:181] (0xc00114cb40) (5) Data frame sent I0204 14:01:28.411809 7 log.go:181] (0xc003d20210) Data frame received for 5 I0204 14:01:28.411817 7 log.go:181] (0xc00114cb40) (5) Data frame handling I0204 14:01:28.411839 7 log.go:181] (0xc00114cb40) (5) Data frame sent I0204 14:01:28.411851 7 log.go:181] (0xc003d20210) Data frame received for 5 I0204 14:01:28.411859 7 log.go:181] (0xc00114cb40) (5) Data frame handling I0204 14:01:28.411883 7 log.go:181] (0xc00114cb40) (5) Data frame sent I0204 14:01:28.412221 7 log.go:181] (0xc003d20210) Data frame received for 3 I0204 14:01:28.412258 7 log.go:181] (0xc001bfb4a0) (3) Data frame handling I0204 14:01:28.412275 7 log.go:181] (0xc001bfb4a0) (3) Data frame sent I0204 14:01:28.412347 7 log.go:181] (0xc003d20210) Data frame received for 5 I0204 14:01:28.412380 7 log.go:181] (0xc00114cb40) (5) Data frame handling I0204 14:01:28.412411 7 log.go:181] (0xc00114cb40) (5) Data frame sent I0204 14:01:28.412602 7 log.go:181] (0xc003d20210) Data frame received for 5 I0204 14:01:28.412627 7 log.go:181] (0xc00114cb40) (5) Data frame handling I0204 14:01:28.412719 7 log.go:181] (0xc003d20210) Data frame received for 3 I0204 14:01:28.412739 7 log.go:181] (0xc001bfb4a0) (3) Data frame handling I0204 14:01:28.414189 7 log.go:181] (0xc003d20210) Data frame received for 1 I0204 14:01:28.414211 7 log.go:181] (0xc0039283c0) (1) Data frame handling I0204 14:01:28.414234 7 log.go:181] (0xc0039283c0) (1) Data frame sent I0204 14:01:28.414261 7 log.go:181] (0xc003d20210) (0xc0039283c0) Stream removed, broadcasting: 1 I0204 14:01:28.414280 7 log.go:181] (0xc003d20210) Go away received I0204 14:01:28.414369 7 log.go:181] (0xc003d20210) (0xc0039283c0) Stream removed, broadcasting: 1 I0204 
14:01:28.414384 7 log.go:181] (0xc003d20210) (0xc001bfb4a0) Stream removed, broadcasting: 3 I0204 14:01:28.414391 7 log.go:181] (0xc003d20210) (0xc00114cb40) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.14, port: 54321 Feb 4 14:01:28.414: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.14:54321/hostname] Namespace:sched-pred-3806 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 4 14:01:28.414: INFO: >>> kubeConfig: /root/.kube/config I0204 14:01:28.443873 7 log.go:181] (0xc003b0aa50) (0xc001bfb720) Create stream I0204 14:01:28.443901 7 log.go:181] (0xc003b0aa50) (0xc001bfb720) Stream added, broadcasting: 1 I0204 14:01:28.446391 7 log.go:181] (0xc003b0aa50) Reply frame received for 1 I0204 14:01:28.446428 7 log.go:181] (0xc003b0aa50) (0xc00114cbe0) Create stream I0204 14:01:28.446446 7 log.go:181] (0xc003b0aa50) (0xc00114cbe0) Stream added, broadcasting: 3 I0204 14:01:28.447413 7 log.go:181] (0xc003b0aa50) Reply frame received for 3 I0204 14:01:28.447456 7 log.go:181] (0xc003b0aa50) (0xc003d308c0) Create stream I0204 14:01:28.447475 7 log.go:181] (0xc003b0aa50) (0xc003d308c0) Stream added, broadcasting: 5 I0204 14:01:28.448418 7 log.go:181] (0xc003b0aa50) Reply frame received for 5 I0204 14:01:28.518587 7 log.go:181] (0xc003b0aa50) Data frame received for 5 I0204 14:01:28.518625 7 log.go:181] (0xc003d308c0) (5) Data frame handling I0204 14:01:28.518662 7 log.go:181] (0xc003d308c0) (5) Data frame sent I0204 14:01:28.518683 7 log.go:181] (0xc003b0aa50) Data frame received for 5 I0204 14:01:28.518700 7 log.go:181] (0xc003d308c0) (5) Data frame handling I0204 14:01:28.518786 7 log.go:181] (0xc003d308c0) (5) Data frame sent I0204 14:01:28.518974 7 log.go:181] (0xc003b0aa50) Data frame received for 5 I0204 14:01:28.519015 7 log.go:181] (0xc003d308c0) (5) Data frame handling I0204 14:01:28.519054 7 log.go:181] (0xc003d308c0) (5) Data frame sent I0204 14:01:28.519075 7 log.go:181] (0xc003b0aa50) Data frame received for 5 I0204 14:01:28.519094 7 log.go:181] (0xc003d308c0) (5) Data frame handling I0204 14:01:28.519115 7 log.go:181] (0xc003d308c0) (5) Data frame sent I0204 14:01:28.519134 7 log.go:181] (0xc003b0aa50) Data frame received for 3 I0204 14:01:28.519152 7 log.go:181] (0xc00114cbe0) (3) Data frame handling I0204 14:01:28.519176 7 log.go:181] (0xc00114cbe0) (3) Data frame sent I0204 14:01:28.519193 7 log.go:181] (0xc003b0aa50) Data frame received for 5 I0204 14:01:28.519214 7 log.go:181] (0xc003d308c0) (5) Data frame handling I0204 14:01:28.519237 7 log.go:181] (0xc003d308c0) (5) Data frame sent I0204 14:01:28.519789 7 log.go:181] (0xc003b0aa50) Data frame received for 3 I0204 14:01:28.519832 7 log.go:181] (0xc00114cbe0) (3) Data frame handling I0204 14:01:28.520137 7 log.go:181] (0xc003b0aa50) Data frame received for 5 I0204 14:01:28.520167 7 log.go:181] (0xc003d308c0) (5) Data frame handling I0204 14:01:28.521725 7 log.go:181] (0xc003b0aa50) Data frame received for 1 I0204 14:01:28.521737 7 log.go:181] (0xc001bfb720) (1) Data frame handling I0204 14:01:28.521746 7 log.go:181] (0xc001bfb720) (1) Data frame sent I0204 14:01:28.521753 7 log.go:181] (0xc003b0aa50) (0xc001bfb720) Stream removed, broadcasting: 1 I0204 14:01:28.521813 7 log.go:181] (0xc003b0aa50) (0xc001bfb720) Stream removed, broadcasting: 1 I0204 14:01:28.521822 7 log.go:181] (0xc003b0aa50) (0xc00114cbe0) Stream removed, broadcasting: 3 I0204 
14:01:28.521953 7 log.go:181] (0xc003b0aa50) Go away received I0204 14:01:28.522014 7 log.go:181] (0xc003b0aa50) (0xc003d308c0) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.14, port: 54321 UDP Feb 4 14:01:28.522: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.14 54321] Namespace:sched-pred-3806 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 4 14:01:28.522: INFO: >>> kubeConfig: /root/.kube/config I0204 14:01:28.558463 7 log.go:181] (0xc003c06630) (0xc00114cdc0) Create stream I0204 14:01:28.558489 7 log.go:181] (0xc003c06630) (0xc00114cdc0) Stream added, broadcasting: 1 I0204 14:01:28.561118 7 log.go:181] (0xc003c06630) Reply frame received for 1 I0204 14:01:28.561161 7 log.go:181] (0xc003c06630) (0xc001bfb7c0) Create stream I0204 14:01:28.561180 7 log.go:181] (0xc003c06630) (0xc001bfb7c0) Stream added, broadcasting: 3 I0204 14:01:28.562739 7 log.go:181] (0xc003c06630) Reply frame received for 3 I0204 14:01:28.562774 7 log.go:181] (0xc003c06630) (0xc003d30960) Create stream I0204 14:01:28.562799 7 log.go:181] (0xc003c06630) (0xc003d30960) Stream added, broadcasting: 5 I0204 14:01:28.565000 7 log.go:181] (0xc003c06630) Reply frame received for 5 I0204 14:01:33.630403 7 log.go:181] (0xc003c06630) Data frame received for 3 I0204 14:01:33.630454 7 log.go:181] (0xc001bfb7c0) (3) Data frame handling I0204 14:01:33.630551 7 log.go:181] (0xc003c06630) Data frame received for 5 I0204 14:01:33.630581 7 log.go:181] (0xc003d30960) (5) Data frame handling I0204 14:01:33.630606 7 log.go:181] (0xc003d30960) (5) Data frame sent I0204 14:01:33.630965 7 log.go:181] (0xc003c06630) Data frame received for 5 I0204 14:01:33.631015 7 log.go:181] (0xc003d30960) (5) Data frame handling I0204 14:01:33.632715 7 log.go:181] (0xc003c06630) Data frame received for 1 I0204 14:01:33.632768 7 log.go:181] (0xc00114cdc0) (1) Data frame handling I0204 14:01:33.632803 7 log.go:181] (0xc00114cdc0) (1) Data frame sent I0204 14:01:33.632949 7 log.go:181] (0xc003c06630) (0xc00114cdc0) Stream removed, broadcasting: 1 I0204 14:01:33.633005 7 log.go:181] (0xc003c06630) Go away received I0204 14:01:33.633107 7 log.go:181] (0xc003c06630) (0xc00114cdc0) Stream removed, broadcasting: 1 I0204 14:01:33.633151 7 log.go:181] (0xc003c06630) (0xc001bfb7c0) Stream removed, broadcasting: 3 I0204 14:01:33.633165 7 log.go:181] (0xc003c06630) (0xc003d30960) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54321 Feb 4 14:01:33.633: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.14 http://127.0.0.1:54321/hostname] Namespace:sched-pred-3806 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 4 14:01:33.633: INFO: >>> kubeConfig: /root/.kube/config I0204 14:01:33.666015 7 log.go:181] (0xc00094db80) (0xc00214afa0) Create stream I0204 14:01:33.666052 7 log.go:181] (0xc00094db80) (0xc00214afa0) Stream added, broadcasting: 1 I0204 14:01:33.668035 7 log.go:181] (0xc00094db80) Reply frame received for 1 I0204 14:01:33.668081 7 log.go:181] (0xc00094db80) (0xc001bfb860) Create stream I0204 14:01:33.668100 7 log.go:181] (0xc00094db80) (0xc001bfb860) Stream added, broadcasting: 3 I0204 14:01:33.669123 7 log.go:181] (0xc00094db80) Reply frame received for 3 I0204 14:01:33.669176 7 log.go:181] (0xc00094db80) 
(0xc00114ce60) Create stream I0204 14:01:33.669201 7 log.go:181] (0xc00094db80) (0xc00114ce60) Stream added, broadcasting: 5 I0204 14:01:33.670359 7 log.go:181] (0xc00094db80) Reply frame received for 5 I0204 14:01:33.758850 7 log.go:181] (0xc00094db80) Data frame received for 5 I0204 14:01:33.758957 7 log.go:181] (0xc00114ce60) (5) Data frame handling I0204 14:01:33.758992 7 log.go:181] (0xc00114ce60) (5) Data frame sent I0204 14:01:33.759025 7 log.go:181] (0xc00094db80) Data frame received for 5 I0204 14:01:33.759046 7 log.go:181] (0xc00114ce60) (5) Data frame handling I0204 14:01:33.759059 7 log.go:181] (0xc00114ce60) (5) Data frame sent I0204 14:01:33.759073 7 log.go:181] (0xc00094db80) Data frame received for 5 I0204 14:01:33.759085 7 log.go:181] (0xc00114ce60) (5) Data frame handling I0204 14:01:33.759127 7 log.go:181] (0xc00114ce60) (5) Data frame sent I0204 14:01:33.759150 7 log.go:181] (0xc00094db80) Data frame received for 5 I0204 14:01:33.759179 7 log.go:181] (0xc00114ce60) (5) Data frame handling I0204 14:01:33.759211 7 log.go:181] (0xc00114ce60) (5) Data frame sent I0204 14:01:33.759239 7 log.go:181] (0xc00094db80) Data frame received for 5 I0204 14:01:33.759266 7 log.go:181] (0xc00114ce60) (5) Data frame handling I0204 14:01:33.759288 7 log.go:181] (0xc00114ce60) (5) Data frame sent I0204 14:01:33.759301 7 log.go:181] (0xc00094db80) Data frame received for 5 I0204 14:01:33.759316 7 log.go:181] (0xc00114ce60) (5) Data frame handling I0204 14:01:33.759374 7 log.go:181] (0xc00114ce60) (5) Data frame sent I0204 14:01:33.759389 7 log.go:181] (0xc00094db80) Data frame received for 5 I0204 14:01:33.759400 7 log.go:181] (0xc00114ce60) (5) Data frame handling I0204 14:01:33.759431 7 log.go:181] (0xc00114ce60) (5) Data frame sent I0204 14:01:33.759461 7 log.go:181] (0xc00094db80) Data frame received for 5 I0204 14:01:33.759487 7 log.go:181] (0xc00114ce60) (5) Data frame handling I0204 14:01:33.759524 7 log.go:181] (0xc00114ce60) (5) Data frame sent I0204 14:01:33.759586 7 log.go:181] (0xc00094db80) Data frame received for 5 I0204 14:01:33.759616 7 log.go:181] (0xc00114ce60) (5) Data frame handling I0204 14:01:33.759647 7 log.go:181] (0xc00114ce60) (5) Data frame sent I0204 14:01:33.759664 7 log.go:181] (0xc00094db80) Data frame received for 5 I0204 14:01:33.759679 7 log.go:181] (0xc00114ce60) (5) Data frame handling I0204 14:01:33.759700 7 log.go:181] (0xc00114ce60) (5) Data frame sent I0204 14:01:33.759714 7 log.go:181] (0xc00094db80) Data frame received for 5 I0204 14:01:33.759728 7 log.go:181] (0xc00114ce60) (5) Data frame handling I0204 14:01:33.759746 7 log.go:181] (0xc00114ce60) (5) Data frame sent I0204 14:01:33.759762 7 log.go:181] (0xc00094db80) Data frame received for 5 I0204 14:01:33.759776 7 log.go:181] (0xc00114ce60) (5) Data frame handling I0204 14:01:33.759792 7 log.go:181] (0xc00114ce60) (5) Data frame sent I0204 14:01:33.759810 7 log.go:181] (0xc00094db80) Data frame received for 3 I0204 14:01:33.759824 7 log.go:181] (0xc001bfb860) (3) Data frame handling I0204 14:01:33.759845 7 log.go:181] (0xc001bfb860) (3) Data frame sent I0204 14:01:33.760402 7 log.go:181] (0xc00094db80) Data frame received for 5 I0204 14:01:33.760417 7 log.go:181] (0xc00114ce60) (5) Data frame handling I0204 14:01:33.760431 7 log.go:181] (0xc00094db80) Data frame received for 3 I0204 14:01:33.760465 7 log.go:181] (0xc001bfb860) (3) Data frame handling I0204 14:01:33.763880 7 log.go:181] (0xc00094db80) Data frame received for 1 I0204 14:01:33.763894 7 log.go:181] (0xc00214afa0) (1) Data frame 
handling I0204 14:01:33.763903 7 log.go:181] (0xc00214afa0) (1) Data frame sent I0204 14:01:33.763912 7 log.go:181] (0xc00094db80) (0xc00214afa0) Stream removed, broadcasting: 1 I0204 14:01:33.763982 7 log.go:181] (0xc00094db80) (0xc00214afa0) Stream removed, broadcasting: 1 I0204 14:01:33.763996 7 log.go:181] (0xc00094db80) (0xc001bfb860) Stream removed, broadcasting: 3 I0204 14:01:33.764014 7 log.go:181] (0xc00094db80) Go away received I0204 14:01:33.764048 7 log.go:181] (0xc00094db80) (0xc00114ce60) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.14, port: 54321 Feb 4 14:01:33.764: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.14:54321/hostname] Namespace:sched-pred-3806 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 4 14:01:33.764: INFO: >>> kubeConfig: /root/.kube/config I0204 14:01:33.802066 7 log.go:181] (0xc003fca2c0) (0xc00214b220) Create stream I0204 14:01:33.802105 7 log.go:181] (0xc003fca2c0) (0xc00214b220) Stream added, broadcasting: 1 I0204 14:01:33.804680 7 log.go:181] (0xc003fca2c0) Reply frame received for 1 I0204 14:01:33.804710 7 log.go:181] (0xc003fca2c0) (0xc00214b2c0) Create stream I0204 14:01:33.804724 7 log.go:181] (0xc003fca2c0) (0xc00214b2c0) Stream added, broadcasting: 3 I0204 14:01:33.805748 7 log.go:181] (0xc003fca2c0) Reply frame received for 3 I0204 14:01:33.805791 7 log.go:181] (0xc003fca2c0) (0xc001bfb900) Create stream I0204 14:01:33.805808 7 log.go:181] (0xc003fca2c0) (0xc001bfb900) Stream added, broadcasting: 5 I0204 14:01:33.806698 7 log.go:181] (0xc003fca2c0) Reply frame received for 5 I0204 14:01:33.888783 7 log.go:181] (0xc003fca2c0) Data frame received for 5 I0204 14:01:33.888814 7 log.go:181] (0xc001bfb900) (5) Data frame handling I0204 14:01:33.888824 7 log.go:181] (0xc001bfb900) (5) Data frame sent I0204 14:01:33.888830 7 log.go:181] (0xc003fca2c0) Data frame received for 5 I0204 14:01:33.888898 7 log.go:181] (0xc001bfb900) (5) Data frame handling I0204 14:01:33.888909 7 log.go:181] (0xc001bfb900) (5) Data frame sent I0204 14:01:33.888916 7 log.go:181] (0xc003fca2c0) Data frame received for 5 I0204 14:01:33.888938 7 log.go:181] (0xc001bfb900) (5) Data frame handling I0204 14:01:33.889007 7 log.go:181] (0xc001bfb900) (5) Data frame sent I0204 14:01:33.889015 7 log.go:181] (0xc003fca2c0) Data frame received for 5 I0204 14:01:33.889019 7 log.go:181] (0xc001bfb900) (5) Data frame handling I0204 14:01:33.889044 7 log.go:181] (0xc001bfb900) (5) Data frame sent I0204 14:01:33.889052 7 log.go:181] (0xc003fca2c0) Data frame received for 5 I0204 14:01:33.889056 7 log.go:181] (0xc001bfb900) (5) Data frame handling I0204 14:01:33.889065 7 log.go:181] (0xc001bfb900) (5) Data frame sent I0204 14:01:33.889069 7 log.go:181] (0xc003fca2c0) Data frame received for 5 I0204 14:01:33.889074 7 log.go:181] (0xc001bfb900) (5) Data frame handling I0204 14:01:33.889086 7 log.go:181] (0xc001bfb900) (5) Data frame sent I0204 14:01:33.889447 7 log.go:181] (0xc003fca2c0) Data frame received for 5 I0204 14:01:33.889476 7 log.go:181] (0xc001bfb900) (5) Data frame handling I0204 14:01:33.889495 7 log.go:181] (0xc001bfb900) (5) Data frame sent I0204 14:01:33.889511 7 log.go:181] (0xc003fca2c0) Data frame received for 3 I0204 14:01:33.889530 7 log.go:181] (0xc00214b2c0) (3) Data frame handling I0204 14:01:33.889539 7 log.go:181] (0xc00214b2c0) (3) Data frame sent I0204 14:01:33.889553 7 
log.go:181] (0xc003fca2c0) Data frame received for 5 I0204 14:01:33.889563 7 log.go:181] (0xc001bfb900) (5) Data frame handling I0204 14:01:33.889578 7 log.go:181] (0xc001bfb900) (5) Data frame sent I0204 14:01:33.890011 7 log.go:181] (0xc003fca2c0) Data frame received for 3 I0204 14:01:33.890028 7 log.go:181] (0xc00214b2c0) (3) Data frame handling I0204 14:01:33.890197 7 log.go:181] (0xc003fca2c0) Data frame received for 5 I0204 14:01:33.890206 7 log.go:181] (0xc001bfb900) (5) Data frame handling I0204 14:01:33.892187 7 log.go:181] (0xc003fca2c0) Data frame received for 1 I0204 14:01:33.892211 7 log.go:181] (0xc00214b220) (1) Data frame handling I0204 14:01:33.892234 7 log.go:181] (0xc00214b220) (1) Data frame sent I0204 14:01:33.892248 7 log.go:181] (0xc003fca2c0) (0xc00214b220) Stream removed, broadcasting: 1 I0204 14:01:33.892320 7 log.go:181] (0xc003fca2c0) (0xc00214b220) Stream removed, broadcasting: 1 I0204 14:01:33.892332 7 log.go:181] (0xc003fca2c0) (0xc00214b2c0) Stream removed, broadcasting: 3 I0204 14:01:33.892421 7 log.go:181] (0xc003fca2c0) Go away received I0204 14:01:33.892514 7 log.go:181] (0xc003fca2c0) (0xc001bfb900) Stream removed, broadcasting: 5 STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.14, port: 54321 UDP Feb 4 14:01:33.892: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.14 54321] Namespace:sched-pred-3806 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 4 14:01:33.892: INFO: >>> kubeConfig: /root/.kube/config I0204 14:01:33.925875 7 log.go:181] (0xc003fca9a0) (0xc00214b540) Create stream I0204 14:01:33.925909 7 log.go:181] (0xc003fca9a0) (0xc00214b540) Stream added, broadcasting: 1 I0204 14:01:33.928485 7 log.go:181] (0xc003fca9a0) Reply frame received for 1 I0204 14:01:33.928532 7 log.go:181] (0xc003fca9a0) (0xc001bfb9a0) Create stream I0204 14:01:33.928548 7 log.go:181] (0xc003fca9a0) (0xc001bfb9a0) Stream added, broadcasting: 3 I0204 14:01:33.929651 7 log.go:181] (0xc003fca9a0) Reply frame received for 3 I0204 14:01:33.929675 7 log.go:181] (0xc003fca9a0) (0xc003928460) Create stream I0204 14:01:33.929683 7 log.go:181] (0xc003fca9a0) (0xc003928460) Stream added, broadcasting: 5 I0204 14:01:33.930748 7 log.go:181] (0xc003fca9a0) Reply frame received for 5 I0204 14:01:38.994254 7 log.go:181] (0xc003fca9a0) Data frame received for 5 I0204 14:01:38.994294 7 log.go:181] (0xc003928460) (5) Data frame handling I0204 14:01:38.994311 7 log.go:181] (0xc003928460) (5) Data frame sent I0204 14:01:38.994335 7 log.go:181] (0xc003fca9a0) Data frame received for 5 I0204 14:01:38.994355 7 log.go:181] (0xc003928460) (5) Data frame handling I0204 14:01:38.994384 7 log.go:181] (0xc003fca9a0) Data frame received for 3 I0204 14:01:38.994426 7 log.go:181] (0xc001bfb9a0) (3) Data frame handling I0204 14:01:38.995870 7 log.go:181] (0xc003fca9a0) Data frame received for 1 I0204 14:01:38.995922 7 log.go:181] (0xc00214b540) (1) Data frame handling I0204 14:01:38.995958 7 log.go:181] (0xc00214b540) (1) Data frame sent I0204 14:01:38.995982 7 log.go:181] (0xc003fca9a0) (0xc00214b540) Stream removed, broadcasting: 1 I0204 14:01:38.996019 7 log.go:181] (0xc003fca9a0) Go away received I0204 14:01:38.996081 7 log.go:181] (0xc003fca9a0) (0xc00214b540) Stream removed, broadcasting: 1 I0204 14:01:38.996102 7 log.go:181] (0xc003fca9a0) (0xc001bfb9a0) Stream removed, broadcasting: 3 I0204 14:01:38.996127 7 log.go:181] (0xc003fca9a0) (0xc003928460) Stream removed, 
broadcasting: 5 STEP: removing the label kubernetes.io/e2e-75d1abf9-7ce7-450f-afdc-b8c215fd0af0 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-75d1abf9-7ce7-450f-afdc-b8c215fd0af0 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:01:39.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3806" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:52.261 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":311,"completed":202,"skipped":3616,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:01:39.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Feb 4 14:01:39.185: INFO: Waiting up to 5m0s for pod "downwardapi-volume-84dbfd59-6a32-40ee-bd12-8d805c4a18d6" in namespace "downward-api-3662" to be "Succeeded or Failed" Feb 4 14:01:39.195: INFO: Pod "downwardapi-volume-84dbfd59-6a32-40ee-bd12-8d805c4a18d6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.302771ms Feb 4 14:01:41.266: INFO: Pod "downwardapi-volume-84dbfd59-6a32-40ee-bd12-8d805c4a18d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081342891s Feb 4 14:01:43.271: INFO: Pod "downwardapi-volume-84dbfd59-6a32-40ee-bd12-8d805c4a18d6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.086126329s STEP: Saw pod success Feb 4 14:01:43.271: INFO: Pod "downwardapi-volume-84dbfd59-6a32-40ee-bd12-8d805c4a18d6" satisfied condition "Succeeded or Failed" Feb 4 14:01:43.274: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-84dbfd59-6a32-40ee-bd12-8d805c4a18d6 container client-container: STEP: delete the pod Feb 4 14:01:43.374: INFO: Waiting for pod downwardapi-volume-84dbfd59-6a32-40ee-bd12-8d805c4a18d6 to disappear Feb 4 14:01:43.378: INFO: Pod downwardapi-volume-84dbfd59-6a32-40ee-bd12-8d805c4a18d6 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:01:43.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3662" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":203,"skipped":3618,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:01:43.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating the pod Feb 4 14:01:43.525: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:01:56.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6155" for this suite. 
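------------------------------
The init-container semantics this test exercises: entries in spec.initContainers run one at a time, each to successful completion, before any regular container starts, and with restartPolicy: Always a failing init container is retried in place rather than terminally failing the pod. A minimal sketch of a pod spec with that shape, under assumed names and commands (busybox:1.29 is an image this suite uses elsewhere; the rest is not the test's exact manifest):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// initContainerPod runs two init containers sequentially to completion
// before starting the long-running main container; with
// RestartPolicyAlways the pod only becomes Ready once both init
// containers have exited 0.
func initContainerPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"}, // assumed name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/sleep", "3600"}},
			},
		},
	}
}
------------------------------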
• [SLOW TEST:13.563 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":311,"completed":204,"skipped":3639,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:01:56.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1554 [It] should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: running the image docker.io/library/httpd:2.4.38-alpine Feb 4 14:01:57.018: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-7467 run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod' Feb 4 14:02:00.177: INFO: stderr: "" Feb 4 14:02:00.177: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Feb 4 14:02:05.228: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-7467 get pod e2e-test-httpd-pod -o json' Feb 4 14:02:05.341: INFO: stderr: "" Feb 4 14:02:05.341: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2021-02-04T14:02:00Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2021-02-04T14:02:00Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": 
{},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.1.123\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2021-02-04T14:02:03Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-7467\",\n \"resourceVersion\": \"2105152\",\n \"uid\": \"5b2525be-885c-4df9-a605-567dff01e993\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-7ncd2\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-7ncd2\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-7ncd2\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-02-04T14:02:00Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-02-04T14:02:03Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-02-04T14:02:03Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2021-02-04T14:02:00Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://d9844f390c5744917f723b8f4de0171635a0847959e46a970b77b146e9275097\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2021-02-04T14:02:03Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.16\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.123\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.123\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": 
\"2021-02-04T14:02:00Z\"\n }\n}\n" STEP: replace the image in the pod Feb 4 14:02:05.341: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-7467 replace -f -' Feb 4 14:02:05.730: INFO: stderr: "" Feb 4 14:02:05.730: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1558 Feb 4 14:02:05.771: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-7467 delete pods e2e-test-httpd-pod' Feb 4 14:02:10.716: INFO: stderr: "" Feb 4 14:02:10.716: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:02:10.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7467" for this suite. • [SLOW TEST:13.833 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1551 should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":311,"completed":205,"skipped":3647,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:02:10.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should support proxy with --port 0 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: starting the proxy server Feb 4 14:02:10.843: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2249 proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:02:10.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2249" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":311,"completed":206,"skipped":3656,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:02:10.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:740 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating service in namespace services-9584 Feb 4 14:02:15.127: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-9584 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Feb 4 14:02:15.367: INFO: stderr: "I0204 14:02:15.275908 3248 log.go:181] (0xc000141080) (0xc000e1c3c0) Create stream\nI0204 14:02:15.275966 3248 log.go:181] (0xc000141080) (0xc000e1c3c0) Stream added, broadcasting: 1\nI0204 14:02:15.277882 3248 log.go:181] (0xc000141080) Reply frame received for 1\nI0204 14:02:15.277932 3248 log.go:181] (0xc000141080) (0xc000ab0000) Create stream\nI0204 14:02:15.277953 3248 log.go:181] (0xc000141080) (0xc000ab0000) Stream added, broadcasting: 3\nI0204 14:02:15.278723 3248 log.go:181] (0xc000141080) Reply frame received for 3\nI0204 14:02:15.278773 3248 log.go:181] (0xc000141080) (0xc000ab00a0) Create stream\nI0204 14:02:15.278798 3248 log.go:181] (0xc000141080) (0xc000ab00a0) Stream added, broadcasting: 5\nI0204 14:02:15.279542 3248 log.go:181] (0xc000141080) Reply frame received for 5\nI0204 14:02:15.358904 3248 log.go:181] (0xc000141080) Data frame received for 5\nI0204 14:02:15.358931 3248 log.go:181] (0xc000ab00a0) (5) Data frame handling\nI0204 14:02:15.358949 3248 log.go:181] (0xc000ab00a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0204 14:02:15.359586 3248 log.go:181] (0xc000141080) Data frame received for 3\nI0204 14:02:15.359608 3248 log.go:181] (0xc000ab0000) (3) Data frame handling\nI0204 14:02:15.359621 3248 log.go:181] (0xc000ab0000) (3) Data frame sent\nI0204 14:02:15.360047 3248 log.go:181] (0xc000141080) Data frame received for 3\nI0204 14:02:15.360058 3248 log.go:181] (0xc000ab0000) (3) Data frame handling\nI0204 14:02:15.360223 3248 log.go:181] (0xc000141080) Data frame received for 5\nI0204 14:02:15.360233 3248 log.go:181] (0xc000ab00a0) (5) Data frame handling\nI0204 14:02:15.361545 3248 log.go:181] (0xc000141080) Data frame received for 1\nI0204 14:02:15.361562 3248 log.go:181] (0xc000e1c3c0) (1) Data frame handling\nI0204 14:02:15.361572 3248 log.go:181] (0xc000e1c3c0) (1) Data frame sent\nI0204 14:02:15.361586 3248 log.go:181] (0xc000141080) (0xc000e1c3c0) Stream removed, broadcasting: 1\nI0204 14:02:15.361604 3248 
log.go:181] (0xc000141080) Go away received\nI0204 14:02:15.361880 3248 log.go:181] (0xc000141080) (0xc000e1c3c0) Stream removed, broadcasting: 1\nI0204 14:02:15.361893 3248 log.go:181] (0xc000141080) (0xc000ab0000) Stream removed, broadcasting: 3\nI0204 14:02:15.361899 3248 log.go:181] (0xc000141080) (0xc000ab00a0) Stream removed, broadcasting: 5\n" Feb 4 14:02:15.367: INFO: stdout: "iptables" Feb 4 14:02:15.367: INFO: proxyMode: iptables Feb 4 14:02:15.420: INFO: Waiting for pod kube-proxy-mode-detector to disappear Feb 4 14:02:15.434: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-9584 STEP: creating replication controller affinity-clusterip-timeout in namespace services-9584 I0204 14:02:15.476351 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-9584, replica count: 3 I0204 14:02:18.526746 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 14:02:21.527083 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 4 14:02:21.534: INFO: Creating new exec pod Feb 4 14:02:26.563: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-9584 exec execpod-affinity8jvqd -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Feb 4 14:02:26.807: INFO: stderr: "I0204 14:02:26.711549 3266 log.go:181] (0xc00003a580) (0xc00053a780) Create stream\nI0204 14:02:26.711610 3266 log.go:181] (0xc00003a580) (0xc00053a780) Stream added, broadcasting: 1\nI0204 14:02:26.713405 3266 log.go:181] (0xc00003a580) Reply frame received for 1\nI0204 14:02:26.713452 3266 log.go:181] (0xc00003a580) (0xc0005ac000) Create stream\nI0204 14:02:26.713476 3266 log.go:181] (0xc00003a580) (0xc0005ac000) Stream added, broadcasting: 3\nI0204 14:02:26.714395 3266 log.go:181] (0xc00003a580) Reply frame received for 3\nI0204 14:02:26.714447 3266 log.go:181] (0xc00003a580) (0xc00061e000) Create stream\nI0204 14:02:26.714487 3266 log.go:181] (0xc00003a580) (0xc00061e000) Stream added, broadcasting: 5\nI0204 14:02:26.715493 3266 log.go:181] (0xc00003a580) Reply frame received for 5\nI0204 14:02:26.798941 3266 log.go:181] (0xc00003a580) Data frame received for 3\nI0204 14:02:26.799036 3266 log.go:181] (0xc0005ac000) (3) Data frame handling\nI0204 14:02:26.799123 3266 log.go:181] (0xc00003a580) Data frame received for 5\nI0204 14:02:26.799175 3266 log.go:181] (0xc00061e000) (5) Data frame handling\nI0204 14:02:26.799215 3266 log.go:181] (0xc00061e000) (5) Data frame sent\nI0204 14:02:26.799233 3266 log.go:181] (0xc00003a580) Data frame received for 5\nI0204 14:02:26.799250 3266 log.go:181] (0xc00061e000) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0204 14:02:26.802129 3266 log.go:181] (0xc00003a580) Data frame received for 1\nI0204 14:02:26.802170 3266 log.go:181] (0xc00053a780) (1) Data frame handling\nI0204 14:02:26.802198 3266 log.go:181] (0xc00053a780) (1) Data frame sent\nI0204 14:02:26.802230 3266 log.go:181] (0xc00003a580) (0xc00053a780) Stream removed, broadcasting: 1\nI0204 14:02:26.802373 3266 log.go:181] (0xc00003a580) Go away received\nI0204 14:02:26.802911 3266 log.go:181] (0xc00003a580) 
(0xc00053a780) Stream removed, broadcasting: 1\nI0204 14:02:26.802937 3266 log.go:181] (0xc00003a580) (0xc0005ac000) Stream removed, broadcasting: 3\nI0204 14:02:26.802951 3266 log.go:181] (0xc00003a580) (0xc00061e000) Stream removed, broadcasting: 5\n" Feb 4 14:02:26.807: INFO: stdout: "" Feb 4 14:02:26.808: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-9584 exec execpod-affinity8jvqd -- /bin/sh -x -c nc -zv -t -w 2 10.96.121.251 80' Feb 4 14:02:27.004: INFO: stderr: "I0204 14:02:26.937060 3281 log.go:181] (0xc0000f8000) (0xc000aa8000) Create stream\nI0204 14:02:26.937121 3281 log.go:181] (0xc0000f8000) (0xc000aa8000) Stream added, broadcasting: 1\nI0204 14:02:26.938891 3281 log.go:181] (0xc0000f8000) Reply frame received for 1\nI0204 14:02:26.938945 3281 log.go:181] (0xc0000f8000) (0xc0008666e0) Create stream\nI0204 14:02:26.938964 3281 log.go:181] (0xc0000f8000) (0xc0008666e0) Stream added, broadcasting: 3\nI0204 14:02:26.939912 3281 log.go:181] (0xc0000f8000) Reply frame received for 3\nI0204 14:02:26.939945 3281 log.go:181] (0xc0000f8000) (0xc000aa80a0) Create stream\nI0204 14:02:26.939954 3281 log.go:181] (0xc0000f8000) (0xc000aa80a0) Stream added, broadcasting: 5\nI0204 14:02:26.941121 3281 log.go:181] (0xc0000f8000) Reply frame received for 5\nI0204 14:02:26.996235 3281 log.go:181] (0xc0000f8000) Data frame received for 5\nI0204 14:02:26.996274 3281 log.go:181] (0xc000aa80a0) (5) Data frame handling\nI0204 14:02:26.996285 3281 log.go:181] (0xc000aa80a0) (5) Data frame sent\nI0204 14:02:26.996293 3281 log.go:181] (0xc0000f8000) Data frame received for 5\nI0204 14:02:26.996299 3281 log.go:181] (0xc000aa80a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.121.251 80\nConnection to 10.96.121.251 80 port [tcp/http] succeeded!\nI0204 14:02:26.996363 3281 log.go:181] (0xc0000f8000) Data frame received for 3\nI0204 14:02:26.996384 3281 log.go:181] (0xc0008666e0) (3) Data frame handling\nI0204 14:02:26.998217 3281 log.go:181] (0xc0000f8000) Data frame received for 1\nI0204 14:02:26.998242 3281 log.go:181] (0xc000aa8000) (1) Data frame handling\nI0204 14:02:26.998255 3281 log.go:181] (0xc000aa8000) (1) Data frame sent\nI0204 14:02:26.998273 3281 log.go:181] (0xc0000f8000) (0xc000aa8000) Stream removed, broadcasting: 1\nI0204 14:02:26.998292 3281 log.go:181] (0xc0000f8000) Go away received\nI0204 14:02:26.998715 3281 log.go:181] (0xc0000f8000) (0xc000aa8000) Stream removed, broadcasting: 1\nI0204 14:02:26.998733 3281 log.go:181] (0xc0000f8000) (0xc0008666e0) Stream removed, broadcasting: 3\nI0204 14:02:26.998741 3281 log.go:181] (0xc0000f8000) (0xc000aa80a0) Stream removed, broadcasting: 5\n" Feb 4 14:02:27.004: INFO: stdout: "" Feb 4 14:02:27.005: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-9584 exec execpod-affinity8jvqd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.121.251:80/ ; done' Feb 4 14:02:27.297: INFO: stderr: "I0204 14:02:27.136037 3299 log.go:181] (0xc00003a420) (0xc000bae280) Create stream\nI0204 14:02:27.136109 3299 log.go:181] (0xc00003a420) (0xc000bae280) Stream added, broadcasting: 1\nI0204 14:02:27.141310 3299 log.go:181] (0xc00003a420) Reply frame received for 1\nI0204 14:02:27.141384 3299 log.go:181] (0xc00003a420) (0xc000a10960) Create stream\nI0204 14:02:27.141407 3299 log.go:181] (0xc00003a420) (0xc000a10960) Stream added, broadcasting: 3\nI0204 14:02:27.144707 
3299 log.go:181] (0xc00003a420) Reply frame received for 3\nI0204 14:02:27.144747 3299 log.go:181] (0xc00003a420) (0xc000b341e0) Create stream\nI0204 14:02:27.144754 3299 log.go:181] (0xc00003a420) (0xc000b341e0) Stream added, broadcasting: 5\nI0204 14:02:27.145599 3299 log.go:181] (0xc00003a420) Reply frame received for 5\nI0204 14:02:27.207775 3299 log.go:181] (0xc00003a420) Data frame received for 5\nI0204 14:02:27.207817 3299 log.go:181] (0xc000b341e0) (5) Data frame handling\nI0204 14:02:27.207835 3299 log.go:181] (0xc000b341e0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.251:80/\nI0204 14:02:27.207878 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.207899 3299 log.go:181] (0xc000a10960) (3) Data frame handling\nI0204 14:02:27.207921 3299 log.go:181] (0xc000a10960) (3) Data frame sent\nI0204 14:02:27.214070 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.214096 3299 log.go:181] (0xc000a10960) (3) Data frame handling\nI0204 14:02:27.214141 3299 log.go:181] (0xc000a10960) (3) Data frame sent\nI0204 14:02:27.214335 3299 log.go:181] (0xc00003a420) Data frame received for 5\nI0204 14:02:27.214357 3299 log.go:181] (0xc000b341e0) (5) Data frame handling\nI0204 14:02:27.214369 3299 log.go:181] (0xc000b341e0) (5) Data frame sent\nI0204 14:02:27.214381 3299 log.go:181] (0xc00003a420) Data frame received for 5\nI0204 14:02:27.214389 3299 log.go:181] (0xc000b341e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.251:80/\nI0204 14:02:27.214408 3299 log.go:181] (0xc000b341e0) (5) Data frame sent\nI0204 14:02:27.214432 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.214446 3299 log.go:181] (0xc000a10960) (3) Data frame handling\nI0204 14:02:27.214452 3299 log.go:181] (0xc000a10960) (3) Data frame sent\nI0204 14:02:27.217534 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.217561 3299 log.go:181] (0xc000a10960) (3) Data frame handling\nI0204 14:02:27.217588 3299 log.go:181] (0xc000a10960) (3) Data frame sent\nI0204 14:02:27.218485 3299 log.go:181] (0xc00003a420) Data frame received for 5\nI0204 14:02:27.218501 3299 log.go:181] (0xc000b341e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.251:80/\nI0204 14:02:27.218518 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.218542 3299 log.go:181] (0xc000a10960) (3) Data frame handling\nI0204 14:02:27.218568 3299 log.go:181] (0xc000a10960) (3) Data frame sent\nI0204 14:02:27.218591 3299 log.go:181] (0xc000b341e0) (5) Data frame sent\nI0204 14:02:27.222125 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.222152 3299 log.go:181] (0xc000a10960) (3) Data frame handling\nI0204 14:02:27.222174 3299 log.go:181] (0xc000a10960) (3) Data frame sent\nI0204 14:02:27.223001 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.223016 3299 log.go:181] (0xc000a10960) (3) Data frame handling\nI0204 14:02:27.223024 3299 log.go:181] (0xc000a10960) (3) Data frame sent\nI0204 14:02:27.223038 3299 log.go:181] (0xc00003a420) Data frame received for 5\nI0204 14:02:27.223048 3299 log.go:181] (0xc000b341e0) (5) Data frame handling\nI0204 14:02:27.223055 3299 log.go:181] (0xc000b341e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.251:80/\nI0204 14:02:27.229306 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.229331 3299 log.go:181] 
(0xc000a10960) (3) Data frame handling\nI0204 14:02:27.229350 3299 log.go:181] (0xc000a10960) (3) Data frame sent\nI0204 14:02:27.230263 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.230292 3299 log.go:181] (0xc000a10960) (3) Data frame handling\nI0204 14:02:27.230302 3299 log.go:181] (0xc000a10960) (3) Data frame sent\nI0204 14:02:27.230315 3299 log.go:181] (0xc00003a420) Data frame received for 5\nI0204 14:02:27.230323 3299 log.go:181] (0xc000b341e0) (5) Data frame handling\nI0204 14:02:27.230330 3299 log.go:181] (0xc000b341e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.251:80/\nI0204 14:02:27.235143 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.235175 3299 log.go:181] (0xc000a10960) (3) Data frame handling\nI0204 14:02:27.235200 3299 log.go:181] (0xc000a10960) (3) Data frame sent\nI0204 14:02:27.235950 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.235972 3299 log.go:181] (0xc00003a420) Data frame received for 5\nI0204 14:02:27.236001 3299 log.go:181] (0xc000b341e0) (5) Data frame handling\nI0204 14:02:27.236009 3299 log.go:181] (0xc000b341e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.251:80/\nI0204 14:02:27.236018 3299 log.go:181] (0xc000a10960) (3) Data frame handling\nI0204 14:02:27.236024 3299 log.go:181] (0xc000a10960) (3) Data frame sent\nI0204 14:02:27.240398 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.240421 3299 log.go:181] (0xc000a10960) (3) Data frame handling\nI0204 14:02:27.240440 3299 log.go:181] (0xc000a10960) (3) Data frame sent\nI0204 14:02:27.241379 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.241401 3299 log.go:181] (0xc000a10960) (3) Data frame handling\nI0204 14:02:27.241414 3299 log.go:181] (0xc000a10960) (3) Data frame sent\nI0204 14:02:27.241436 3299 log.go:181] (0xc00003a420) Data frame received for 5\nI0204 14:02:27.241470 3299 log.go:181] (0xc000b341e0) (5) Data frame handling\nI0204 14:02:27.241497 3299 log.go:181] (0xc000b341e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.251:80/\nI0204 14:02:27.246223 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.246241 3299 log.go:181] (0xc000a10960) (3) Data frame handling\nI0204 14:02:27.246250 3299 log.go:181] (0xc000a10960) (3) Data frame sent\nI0204 14:02:27.246711 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.246728 3299 log.go:181] (0xc00003a420) Data frame received for 5\nI0204 14:02:27.246742 3299 log.go:181] (0xc000b341e0) (5) Data frame handling\nI0204 14:02:27.246750 3299 log.go:181] (0xc000b341e0) (5) Data frame sent\nI0204 14:02:27.246757 3299 log.go:181] (0xc00003a420) Data frame received for 5\nI0204 14:02:27.246764 3299 log.go:181] (0xc000b341e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.251:80/\nI0204 14:02:27.246779 3299 log.go:181] (0xc000b341e0) (5) Data frame sent\nI0204 14:02:27.246788 3299 log.go:181] (0xc000a10960) (3) Data frame handling\nI0204 14:02:27.246795 3299 log.go:181] (0xc000a10960) (3) Data frame sent\nI0204 14:02:27.250620 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.250640 3299 log.go:181] (0xc000a10960) (3) Data frame handling\nI0204 14:02:27.250658 3299 log.go:181] (0xc000a10960) (3) Data frame sent\nI0204 14:02:27.251685 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.251707 3299 
log.go:181] (0xc000a10960) (3) Data frame handling\nI0204 14:02:27.251735 3299 log.go:181] (0xc000a10960) (3) Data frame sent\nI0204 14:02:27.251755 3299 log.go:181] (0xc00003a420) Data frame received for 5\nI0204 14:02:27.251766 3299 log.go:181] (0xc000b341e0) (5) Data frame handling\nI0204 14:02:27.251783 3299 log.go:181] (0xc000b341e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.251:80/\nI0204 14:02:27.255540 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.255584 3299 log.go:181] (0xc000a10960) (3) Data frame handling\nI0204 14:02:27.255611 3299 log.go:181] (0xc000a10960) (3) Data frame sent\nI0204 14:02:27.255634 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.255654 3299 log.go:181] (0xc000a10960) (3) Data frame handling\nI0204 14:02:27.255676 3299 log.go:181] (0xc000a10960) (3) Data frame sent\nI0204 14:02:27.255700 3299 log.go:181] (0xc00003a420) Data frame received for 5\nI0204 14:02:27.255719 3299 log.go:181] (0xc000b341e0) (5) Data frame handling\nI0204 14:02:27.255739 3299 log.go:181] (0xc000b341e0) (5) Data frame sent\nI0204 14:02:27.255791 3299 log.go:181] (0xc00003a420) Data frame received for 5\n+ echo\nI0204 14:02:27.255808 3299 log.go:181] (0xc000b341e0) (5) Data frame handling\nI0204 14:02:27.255830 3299 log.go:181] (0xc000b341e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.96.121.251:80/\nI0204 14:02:27.260621 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.260644 3299 log.go:181] (0xc000a10960) (3) Data frame handling\nI0204 14:02:27.260661 3299 log.go:181] (0xc000a10960) (3) Data frame sent\nI0204 14:02:27.261957 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.261993 3299 log.go:181] (0xc000a10960) (3) Data frame handling\nI0204 14:02:27.262028 3299 log.go:181] (0xc000a10960) (3) Data frame sent\nI0204 14:02:27.262049 3299 log.go:181] (0xc00003a420) Data frame received for 5\nI0204 14:02:27.262077 3299 log.go:181] (0xc000b341e0) (5) Data frame handling\nI0204 14:02:27.262105 3299 log.go:181] (0xc000b341e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.251:80/\nI0204 14:02:27.265692 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.265727 3299 log.go:181] (0xc000a10960) (3) Data frame handling\nI0204 14:02:27.265743 3299 log.go:181] (0xc000a10960) (3) Data frame sent\nI0204 14:02:27.266017 3299 log.go:181] (0xc00003a420) Data frame received for 5\nI0204 14:02:27.266039 3299 log.go:181] (0xc000b341e0) (5) Data frame handling\nI0204 14:02:27.266049 3299 log.go:181] (0xc000b341e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.251:80/\nI0204 14:02:27.266065 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.266074 3299 log.go:181] (0xc000a10960) (3) Data frame handling\nI0204 14:02:27.266085 3299 log.go:181] (0xc000a10960) (3) Data frame sent\nI0204 14:02:27.270811 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.270825 3299 log.go:181] (0xc000a10960) (3) Data frame handling\nI0204 14:02:27.270837 3299 log.go:181] (0xc000a10960) (3) Data frame sent\nI0204 14:02:27.271608 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.271619 3299 log.go:181] (0xc000a10960) (3) Data frame handling\nI0204 14:02:27.271626 3299 log.go:181] (0xc000a10960) (3) Data frame sent\nI0204 14:02:27.271632 3299 log.go:181] (0xc00003a420) Data frame received for 5\nI0204 14:02:27.271637 
3299 log.go:181] (0xc000b341e0) (5) Data frame handling\nI0204 14:02:27.271642 3299 log.go:181] (0xc000b341e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.251:80/\nI0204 14:02:27.274383 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.274395 3299 log.go:181] (0xc000a10960) (3) Data frame handling\nI0204 14:02:27.274403 3299 log.go:181] (0xc000a10960) (3) Data frame sent\nI0204 14:02:27.274713 3299 log.go:181] (0xc00003a420) Data frame received for 5\nI0204 14:02:27.274726 3299 log.go:181] (0xc000b341e0) (5) Data frame handling\nI0204 14:02:27.274733 3299 log.go:181] (0xc000b341e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.251:80/\nI0204 14:02:27.274745 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.274760 3299 log.go:181] (0xc000a10960) (3) Data frame handling\nI0204 14:02:27.274769 3299 log.go:181] (0xc000a10960) (3) Data frame sent\nI0204 14:02:27.279221 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.279233 3299 log.go:181] (0xc000a10960) (3) Data frame handling\nI0204 14:02:27.279240 3299 log.go:181] (0xc000a10960) (3) Data frame sent\nI0204 14:02:27.279919 3299 log.go:181] (0xc00003a420) Data frame received for 5\nI0204 14:02:27.279937 3299 log.go:181] (0xc000b341e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.251:80/\nI0204 14:02:27.279956 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.279982 3299 log.go:181] (0xc000a10960) (3) Data frame handling\nI0204 14:02:27.280001 3299 log.go:181] (0xc000a10960) (3) Data frame sent\nI0204 14:02:27.280025 3299 log.go:181] (0xc000b341e0) (5) Data frame sent\nI0204 14:02:27.284079 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.284103 3299 log.go:181] (0xc000a10960) (3) Data frame handling\nI0204 14:02:27.284130 3299 log.go:181] (0xc000a10960) (3) Data frame sent\nI0204 14:02:27.284590 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.284611 3299 log.go:181] (0xc000a10960) (3) Data frame handling\nI0204 14:02:27.284623 3299 log.go:181] (0xc000a10960) (3) Data frame sent\nI0204 14:02:27.284641 3299 log.go:181] (0xc00003a420) Data frame received for 5\nI0204 14:02:27.284654 3299 log.go:181] (0xc000b341e0) (5) Data frame handling\nI0204 14:02:27.284690 3299 log.go:181] (0xc000b341e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.121.251:80/\nI0204 14:02:27.288129 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.288165 3299 log.go:181] (0xc000a10960) (3) Data frame handling\nI0204 14:02:27.288201 3299 log.go:181] (0xc000a10960) (3) Data frame sent\nI0204 14:02:27.288647 3299 log.go:181] (0xc00003a420) Data frame received for 3\nI0204 14:02:27.288678 3299 log.go:181] (0xc000a10960) (3) Data frame handling\nI0204 14:02:27.288742 3299 log.go:181] (0xc00003a420) Data frame received for 5\nI0204 14:02:27.288766 3299 log.go:181] (0xc000b341e0) (5) Data frame handling\nI0204 14:02:27.290702 3299 log.go:181] (0xc00003a420) Data frame received for 1\nI0204 14:02:27.290728 3299 log.go:181] (0xc000bae280) (1) Data frame handling\nI0204 14:02:27.290746 3299 log.go:181] (0xc000bae280) (1) Data frame sent\nI0204 14:02:27.290764 3299 log.go:181] (0xc00003a420) (0xc000bae280) Stream removed, broadcasting: 1\nI0204 14:02:27.290844 3299 log.go:181] (0xc00003a420) Go away received\nI0204 14:02:27.291230 3299 log.go:181] (0xc00003a420) (0xc000bae280) Stream 
removed, broadcasting: 1\nI0204 14:02:27.291248 3299 log.go:181] (0xc00003a420) (0xc000a10960) Stream removed, broadcasting: 3\nI0204 14:02:27.291257 3299 log.go:181] (0xc00003a420) (0xc000b341e0) Stream removed, broadcasting: 5\n" Feb 4 14:02:27.298: INFO: stdout: "\naffinity-clusterip-timeout-2wsk6\naffinity-clusterip-timeout-2wsk6\naffinity-clusterip-timeout-2wsk6\naffinity-clusterip-timeout-2wsk6\naffinity-clusterip-timeout-2wsk6\naffinity-clusterip-timeout-2wsk6\naffinity-clusterip-timeout-2wsk6\naffinity-clusterip-timeout-2wsk6\naffinity-clusterip-timeout-2wsk6\naffinity-clusterip-timeout-2wsk6\naffinity-clusterip-timeout-2wsk6\naffinity-clusterip-timeout-2wsk6\naffinity-clusterip-timeout-2wsk6\naffinity-clusterip-timeout-2wsk6\naffinity-clusterip-timeout-2wsk6\naffinity-clusterip-timeout-2wsk6" Feb 4 14:02:27.298: INFO: Received response from host: affinity-clusterip-timeout-2wsk6 Feb 4 14:02:27.298: INFO: Received response from host: affinity-clusterip-timeout-2wsk6 Feb 4 14:02:27.298: INFO: Received response from host: affinity-clusterip-timeout-2wsk6 Feb 4 14:02:27.298: INFO: Received response from host: affinity-clusterip-timeout-2wsk6 Feb 4 14:02:27.298: INFO: Received response from host: affinity-clusterip-timeout-2wsk6 Feb 4 14:02:27.298: INFO: Received response from host: affinity-clusterip-timeout-2wsk6 Feb 4 14:02:27.298: INFO: Received response from host: affinity-clusterip-timeout-2wsk6 Feb 4 14:02:27.298: INFO: Received response from host: affinity-clusterip-timeout-2wsk6 Feb 4 14:02:27.298: INFO: Received response from host: affinity-clusterip-timeout-2wsk6 Feb 4 14:02:27.298: INFO: Received response from host: affinity-clusterip-timeout-2wsk6 Feb 4 14:02:27.298: INFO: Received response from host: affinity-clusterip-timeout-2wsk6 Feb 4 14:02:27.298: INFO: Received response from host: affinity-clusterip-timeout-2wsk6 Feb 4 14:02:27.298: INFO: Received response from host: affinity-clusterip-timeout-2wsk6 Feb 4 14:02:27.298: INFO: Received response from host: affinity-clusterip-timeout-2wsk6 Feb 4 14:02:27.298: INFO: Received response from host: affinity-clusterip-timeout-2wsk6 Feb 4 14:02:27.298: INFO: Received response from host: affinity-clusterip-timeout-2wsk6 Feb 4 14:02:27.298: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-9584 exec execpod-affinity8jvqd -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.96.121.251:80/' Feb 4 14:02:27.518: INFO: stderr: "I0204 14:02:27.438499 3317 log.go:181] (0xc000271130) (0xc000b7c3c0) Create stream\nI0204 14:02:27.438557 3317 log.go:181] (0xc000271130) (0xc000b7c3c0) Stream added, broadcasting: 1\nI0204 14:02:27.440188 3317 log.go:181] (0xc000271130) Reply frame received for 1\nI0204 14:02:27.440238 3317 log.go:181] (0xc000271130) (0xc00063a500) Create stream\nI0204 14:02:27.440276 3317 log.go:181] (0xc000271130) (0xc00063a500) Stream added, broadcasting: 3\nI0204 14:02:27.441151 3317 log.go:181] (0xc000271130) Reply frame received for 3\nI0204 14:02:27.441181 3317 log.go:181] (0xc000271130) (0xc000b7c460) Create stream\nI0204 14:02:27.441194 3317 log.go:181] (0xc000271130) (0xc000b7c460) Stream added, broadcasting: 5\nI0204 14:02:27.442087 3317 log.go:181] (0xc000271130) Reply frame received for 5\nI0204 14:02:27.510218 3317 log.go:181] (0xc000271130) Data frame received for 5\nI0204 14:02:27.510243 3317 log.go:181] (0xc000b7c460) (5) Data frame handling\nI0204 14:02:27.510257 3317 log.go:181] (0xc000b7c460) (5) Data frame sent\n+ curl -q -s 
--connect-timeout 2 http://10.96.121.251:80/\nI0204 14:02:27.510661 3317 log.go:181] (0xc000271130) Data frame received for 3\nI0204 14:02:27.510678 3317 log.go:181] (0xc00063a500) (3) Data frame handling\nI0204 14:02:27.510692 3317 log.go:181] (0xc00063a500) (3) Data frame sent\nI0204 14:02:27.511404 3317 log.go:181] (0xc000271130) Data frame received for 5\nI0204 14:02:27.511483 3317 log.go:181] (0xc000b7c460) (5) Data frame handling\nI0204 14:02:27.511537 3317 log.go:181] (0xc000271130) Data frame received for 3\nI0204 14:02:27.511554 3317 log.go:181] (0xc00063a500) (3) Data frame handling\nI0204 14:02:27.513250 3317 log.go:181] (0xc000271130) Data frame received for 1\nI0204 14:02:27.513266 3317 log.go:181] (0xc000b7c3c0) (1) Data frame handling\nI0204 14:02:27.513273 3317 log.go:181] (0xc000b7c3c0) (1) Data frame sent\nI0204 14:02:27.513280 3317 log.go:181] (0xc000271130) (0xc000b7c3c0) Stream removed, broadcasting: 1\nI0204 14:02:27.513506 3317 log.go:181] (0xc000271130) Go away received\nI0204 14:02:27.513573 3317 log.go:181] (0xc000271130) (0xc000b7c3c0) Stream removed, broadcasting: 1\nI0204 14:02:27.513590 3317 log.go:181] (0xc000271130) (0xc00063a500) Stream removed, broadcasting: 3\nI0204 14:02:27.513595 3317 log.go:181] (0xc000271130) (0xc000b7c460) Stream removed, broadcasting: 5\n" Feb 4 14:02:27.519: INFO: stdout: "affinity-clusterip-timeout-2wsk6" Feb 4 14:02:47.519: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-9584 exec execpod-affinity8jvqd -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.96.121.251:80/' Feb 4 14:02:47.744: INFO: stderr: "I0204 14:02:47.658520 3335 log.go:181] (0xc00063e000) (0xc000636000) Create stream\nI0204 14:02:47.658579 3335 log.go:181] (0xc00063e000) (0xc000636000) Stream added, broadcasting: 1\nI0204 14:02:47.667725 3335 log.go:181] (0xc00063e000) Reply frame received for 1\nI0204 14:02:47.667767 3335 log.go:181] (0xc00063e000) (0xc0009c6280) Create stream\nI0204 14:02:47.667776 3335 log.go:181] (0xc00063e000) (0xc0009c6280) Stream added, broadcasting: 3\nI0204 14:02:47.669302 3335 log.go:181] (0xc00063e000) Reply frame received for 3\nI0204 14:02:47.669331 3335 log.go:181] (0xc00063e000) (0xc0002861e0) Create stream\nI0204 14:02:47.669340 3335 log.go:181] (0xc00063e000) (0xc0002861e0) Stream added, broadcasting: 5\nI0204 14:02:47.670081 3335 log.go:181] (0xc00063e000) Reply frame received for 5\nI0204 14:02:47.730965 3335 log.go:181] (0xc00063e000) Data frame received for 5\nI0204 14:02:47.731015 3335 log.go:181] (0xc0002861e0) (5) Data frame handling\nI0204 14:02:47.731053 3335 log.go:181] (0xc0002861e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.96.121.251:80/\nI0204 14:02:47.735570 3335 log.go:181] (0xc00063e000) Data frame received for 3\nI0204 14:02:47.735606 3335 log.go:181] (0xc0009c6280) (3) Data frame handling\nI0204 14:02:47.735629 3335 log.go:181] (0xc0009c6280) (3) Data frame sent\nI0204 14:02:47.736449 3335 log.go:181] (0xc00063e000) Data frame received for 3\nI0204 14:02:47.736496 3335 log.go:181] (0xc0009c6280) (3) Data frame handling\nI0204 14:02:47.736604 3335 log.go:181] (0xc00063e000) Data frame received for 5\nI0204 14:02:47.736644 3335 log.go:181] (0xc0002861e0) (5) Data frame handling\nI0204 14:02:47.738840 3335 log.go:181] (0xc00063e000) Data frame received for 1\nI0204 14:02:47.738879 3335 log.go:181] (0xc000636000) (1) Data frame handling\nI0204 14:02:47.738914 3335 log.go:181] (0xc000636000) (1) Data frame 
sent\nI0204 14:02:47.738938 3335 log.go:181] (0xc00063e000) (0xc000636000) Stream removed, broadcasting: 1\nI0204 14:02:47.738964 3335 log.go:181] (0xc00063e000) Go away received\nI0204 14:02:47.739516 3335 log.go:181] (0xc00063e000) (0xc000636000) Stream removed, broadcasting: 1\nI0204 14:02:47.739541 3335 log.go:181] (0xc00063e000) (0xc0009c6280) Stream removed, broadcasting: 3\nI0204 14:02:47.739551 3335 log.go:181] (0xc00063e000) (0xc0002861e0) Stream removed, broadcasting: 5\n" Feb 4 14:02:47.744: INFO: stdout: "affinity-clusterip-timeout-4gkj6" Feb 4 14:02:47.744: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-9584, will wait for the garbage collector to delete the pods Feb 4 14:02:47.858: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 7.524324ms Feb 4 14:02:48.458: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 600.19722ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:03:40.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9584" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:744 • [SLOW TEST:89.983 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":311,"completed":207,"skipped":3675,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:03:40.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Creating a NodePort Service STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota STEP: Ensuring resource quota status captures service creation STEP: Deleting Services STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:03:52.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2049" for this suite. • [SLOW TEST:11.720 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":311,"completed":208,"skipped":3676,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:03:52.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating the pod Feb 4 14:03:57.296: INFO: Successfully updated pod "annotationupdate7fd864a8-47ba-4087-95b8-ffb0e7ac4070" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:04:01.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2390" for this suite. 
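------------------------------
The Services session-affinity entry above (completed test 207) shows the behaviour under test: with ClientIP affinity, all sixteen curls land on the same backend (affinity-clusterip-timeout-2wsk6), and after the test waits out the affinity timeout (the 20-second gap between 14:02:27 and 14:02:47) the next request lands on a different pod (affinity-clusterip-timeout-4gkj6). A minimal service manifest for that behaviour; the selector, target port, and timeout value are assumptions, since the log does not print the spec the suite builds:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: affinity-clusterip-timeout
    spec:
      selector:
        name: affinity-clusterip-timeout     # assumed RC pod label
      ports:
      - port: 80
        targetPort: 9376                     # assumed backend port
      sessionAffinity: ClientIP
      sessionAffinityConfig:
        clientIP:
          timeoutSeconds: 10                 # assumed; pin expires after 10s idle
    EOF
------------------------------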
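------------------------------
The ResourceQuota entry above counts Service objects against a quota and expects the over-limit LoadBalancer-with-NodePort service to be rejected. A sketch of such a quota, with illustrative limits (the log does not print the counts the suite sets):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: test-quota
    spec:
      hard:
        services: "2"                # total Service objects
        services.nodeports: "1"      # Services that allocate node ports
        services.loadbalancers: "1"  # Services of type LoadBalancer
    EOF
    # Watch used/hard move as Services are created and deleted:
    kubectl describe resourcequota test-quota
------------------------------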
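------------------------------
For the Downward API entry just above: the pod mounts its own metadata.annotations through a downwardAPI volume, the test updates an annotation, and the kubelet rewrites the mounted file in the running container without a restart. A hedged sketch (the annotation key/value, image, and mount path are illustrative; the suite's pod differs):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: annotationupdate
      annotations:
        builder: alice
    spec:
      containers:
      - name: client-container
        image: docker.io/library/busybox:1.29
        command: ["sleep", "3600"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
    EOF
    # Update the annotation; /etc/podinfo/annotations refreshes in place.
    kubectl annotate pod annotationupdate --overwrite builder=bob
------------------------------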
• [SLOW TEST:8.753 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":311,"completed":209,"skipped":3699,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:04:01.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:04:01.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5429" for this suite. 
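------------------------------
The three discovery fetches in the CustomResourceDefinition entry above map directly onto raw API paths, which can be replayed with 'kubectl get --raw':

    kubectl get --raw /apis                           # group list; contains apiextensions.k8s.io
    kubectl get --raw /apis/apiextensions.k8s.io      # group doc; lists version v1
    kubectl get --raw /apis/apiextensions.k8s.io/v1   # version doc; lists the customresourcedefinitions resource
------------------------------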
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":311,"completed":210,"skipped":3700,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:04:01.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 4 14:04:02.209: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 4 14:04:04.219: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748044242, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748044242, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748044242, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748044242, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 4 14:04:07.279: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:04:17.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1230" for this suite. STEP: Destroying namespace "webhook-1230-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:16.031 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":311,"completed":211,"skipped":3709,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:04:17.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating Pod STEP: Reading file content from the nginx-container Feb 4 14:04:23.709: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-2161 PodName:pod-sharedvolume-d1c99641-9fe5-41ac-8699-08b2d990bb3d ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 4 14:04:23.709: INFO: >>> kubeConfig: /root/.kube/config I0204 14:04:23.743414 7 log.go:181] (0xc0009d3ad0) (0xc004705220) Create stream I0204 14:04:23.743445 7 log.go:181] (0xc0009d3ad0) (0xc004705220) Stream added, broadcasting: 1 I0204 14:04:23.745587 7 log.go:181] (0xc0009d3ad0) Reply frame received for 1 I0204 14:04:23.745620 7 log.go:181] (0xc0009d3ad0) (0xc001360000) Create stream I0204 14:04:23.745631 7 log.go:181] (0xc0009d3ad0) (0xc001360000) Stream added, broadcasting: 3 I0204 14:04:23.746749 7 log.go:181] (0xc0009d3ad0) Reply frame received for 3 I0204 14:04:23.746779 7 log.go:181] (0xc0009d3ad0) (0xc003683d60) Create stream I0204 14:04:23.746790 7 log.go:181] (0xc0009d3ad0) (0xc003683d60) Stream added, broadcasting: 5 I0204 14:04:23.747946 7 log.go:181] (0xc0009d3ad0) Reply frame received for 5 I0204 14:04:23.842476 7 log.go:181] (0xc0009d3ad0) Data frame received for 3 I0204 14:04:23.842514 7 log.go:181] (0xc001360000) (3) Data frame handling I0204 14:04:23.842525 7 log.go:181] (0xc001360000) (3) Data frame sent I0204 14:04:23.842568 7 log.go:181] (0xc0009d3ad0) Data frame received for 5 I0204 14:04:23.842602 7 log.go:181] (0xc003683d60) (5) Data frame handling I0204 14:04:23.842661 7 log.go:181] 
(0xc0009d3ad0) Data frame received for 3 I0204 14:04:23.842686 7 log.go:181] (0xc001360000) (3) Data frame handling I0204 14:04:23.844570 7 log.go:181] (0xc0009d3ad0) Data frame received for 1 I0204 14:04:23.844591 7 log.go:181] (0xc004705220) (1) Data frame handling I0204 14:04:23.844606 7 log.go:181] (0xc004705220) (1) Data frame sent I0204 14:04:23.844619 7 log.go:181] (0xc0009d3ad0) (0xc004705220) Stream removed, broadcasting: 1 I0204 14:04:23.844668 7 log.go:181] (0xc0009d3ad0) Go away received I0204 14:04:23.844701 7 log.go:181] (0xc0009d3ad0) (0xc004705220) Stream removed, broadcasting: 1 I0204 14:04:23.844737 7 log.go:181] (0xc0009d3ad0) (0xc001360000) Stream removed, broadcasting: 3 I0204 14:04:23.844811 7 log.go:181] (0xc0009d3ad0) (0xc003683d60) Stream removed, broadcasting: 5 Feb 4 14:04:23.844: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:04:23.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2161" for this suite. • [SLOW TEST:6.248 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":311,"completed":212,"skipped":3712,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:04:23.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Feb 4 14:04:24.008: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6579 a884eb0b-adc4-4843-a208-7ee5a1f0dc4a 2105779 0 2021-02-04 14:04:23 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-02-04 14:04:23 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 4 14:04:24.008: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6579 a884eb0b-adc4-4843-a208-7ee5a1f0dc4a 2105780 0 2021-02-04 14:04:23 +0000 UTC 
map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2021-02-04 14:04:23 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:04:24.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6579" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":311,"completed":213,"skipped":3749,"failed":0} ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:04:24.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test emptydir 0777 on node default medium Feb 4 14:04:24.117: INFO: Waiting up to 5m0s for pod "pod-66520de1-d40c-4f89-9bc3-7e78acfcac9b" in namespace "emptydir-1716" to be "Succeeded or Failed" Feb 4 14:04:24.128: INFO: Pod "pod-66520de1-d40c-4f89-9bc3-7e78acfcac9b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.201182ms Feb 4 14:04:26.211: INFO: Pod "pod-66520de1-d40c-4f89-9bc3-7e78acfcac9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094328506s Feb 4 14:04:28.217: INFO: Pod "pod-66520de1-d40c-4f89-9bc3-7e78acfcac9b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099593282s Feb 4 14:04:30.222: INFO: Pod "pod-66520de1-d40c-4f89-9bc3-7e78acfcac9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.10500071s STEP: Saw pod success Feb 4 14:04:30.222: INFO: Pod "pod-66520de1-d40c-4f89-9bc3-7e78acfcac9b" satisfied condition "Succeeded or Failed" Feb 4 14:04:30.225: INFO: Trying to get logs from node latest-worker pod pod-66520de1-d40c-4f89-9bc3-7e78acfcac9b container test-container: STEP: delete the pod Feb 4 14:04:30.297: INFO: Waiting for pod pod-66520de1-d40c-4f89-9bc3-7e78acfcac9b to disappear Feb 4 14:04:30.318: INFO: Pod pod-66520de1-d40c-4f89-9bc3-7e78acfcac9b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:04:30.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1716" for this suite. 
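------------------------------
The AdmissionWebhook "deny pod and configmap creation" entry a few tests above registers a validating webhook against the sample-webhook-deployment service and then expects pod/configmap creations (and non-compliant updates) to be rejected. A heavily hedged sketch of such a registration; the path, caBundle, and rule set are assumptions, since the log shows only STEP names, not the configuration the suite builds:

    kubectl apply -f - <<EOF
    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      name: deny-unwanted-create
    webhooks:
    - name: deny-unwanted-create.example.com
      clientConfig:
        service:
          namespace: webhook-1230          # from the log
          name: e2e-test-webhook           # from the log
          path: /validate                  # assumed
        caBundle: <base64-encoded CA>      # must be supplied
      rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods", "configmaps"]
      admissionReviewVersions: ["v1"]
      sideEffects: None
      failurePolicy: Fail
    EOF

The "namespace that bypass the webhook" STEP implies the suite also scopes the webhook with a namespaceSelector, omitted here.
------------------------------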
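------------------------------
The EmptyDir "shared volumes between containers" entry above demonstrates that two containers in one pod see the same emptyDir contents; the test execs into busybox-main-container and cats /usr/share/volumeshare/shareddata.txt written by a sibling container. A sketch with an assumed writer container (the suite's second container and image differ; the same emptyDir mechanism also underlies the 0777-mode entry just above):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-sharedvolume
    spec:
      volumes:
      - name: shared-data
        emptyDir: {}
      containers:
      - name: busybox-main-container
        image: docker.io/library/busybox:1.29
        command: ["sleep", "3600"]
        volumeMounts:
        - name: shared-data
          mountPath: /usr/share/volumeshare
      - name: writer                       # assumed sibling container
        image: docker.io/library/busybox:1.29
        command: ["/bin/sh", "-c", "echo hello > /usr/share/volumeshare/shareddata.txt; sleep 3600"]
        volumeMounts:
        - name: shared-data
          mountPath: /usr/share/volumeshare
    EOF
    kubectl exec pod-sharedvolume -c busybox-main-container -- cat /usr/share/volumeshare/shareddata.txt
------------------------------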
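------------------------------
The Watchers entry above starts a watch "from the resource version returned by the first update" and therefore receives only the later MODIFIED (mutation: 2) and DELETED events. The suite does this through the Go client; an equivalent over the raw API, via a local proxy, uses the watch and resourceVersion query parameters (port and resource version are placeholders):

    kubectl proxy --port=8001 &
    curl -s "http://127.0.0.1:8001/api/v1/namespaces/watch-6579/configmaps?watch=1&resourceVersion=<rv-from-first-update>"
    # Streams one JSON line per event from that version onward,
    # e.g. the MODIFIED and DELETED events shown in the log.
------------------------------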
• [SLOW TEST:6.313 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":214,"skipped":3749,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:04:30.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1812.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-1812.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1812.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1812.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-1812.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1812.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 4 14:04:38.625: INFO: DNS probes using dns-1812/dns-test-33de924f-c48d-43c1-bfbd-cbd4aae66ff1 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:04:38.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1812" for this suite. 
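Each probe pod above loops over getent/dig and writes an OK file per successful name, which the suite then collects. To reproduce one lookup by hand while a probe pod is still running (the pod name below is taken from the log; getent and dig are both present in the probe images):

kubectl -n dns-1812 exec dns-test-33de924f-c48d-43c1-bfbd-cbd4aae66ff1 -- getent hosts dns-querier-2.dns-test-service-2.dns-1812.svc.cluster.local

Pod A records use the dashed-IP form checked by the podARec half of the loop, e.g. 10-244-1-9.dns-1812.pod.cluster.local (the IP here is hypothetical).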
• [SLOW TEST:8.455 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":311,"completed":215,"skipped":3774,"failed":0} [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:04:38.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:04:39.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-765" for this suite. STEP: Destroying namespace "nspatchtest-623d10b6-677d-4536-907d-ea24000cc58c-6961" for this suite. 
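The patch step above is an ordinary merge patch against the namespace object. A CLI equivalent, with an illustrative namespace name and label (the suite generates random names like the nspatchtest-* one being destroyed here):

kubectl create namespace patch-demo
kubectl patch namespace patch-demo --type=merge -p '{"metadata":{"labels":{"testLabel":"testValue"}}}'
kubectl get namespace patch-demo --show-labels
kubectl delete namespace patch-demo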
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":311,"completed":216,"skipped":3774,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:04:39.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 4 14:04:40.385: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 4 14:04:42.397: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748044280, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748044280, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748044280, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748044280, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 4 14:04:45.437: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 
14:04:45.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1484" for this suite. STEP: Destroying namespace "webhook-1484-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:5.918 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":311,"completed":217,"skipped":3802,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:04:45.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Feb 4 14:04:45.774: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9e7bff39-c4fb-4a62-a62a-ba4ba62e1dda" in namespace "projected-4589" to be "Succeeded or Failed" Feb 4 14:04:45.823: INFO: Pod "downwardapi-volume-9e7bff39-c4fb-4a62-a62a-ba4ba62e1dda": Phase="Pending", Reason="", readiness=false. Elapsed: 49.403297ms Feb 4 14:04:47.827: INFO: Pod "downwardapi-volume-9e7bff39-c4fb-4a62-a62a-ba4ba62e1dda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053660197s Feb 4 14:04:49.832: INFO: Pod "downwardapi-volume-9e7bff39-c4fb-4a62-a62a-ba4ba62e1dda": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.058004825s STEP: Saw pod success Feb 4 14:04:49.832: INFO: Pod "downwardapi-volume-9e7bff39-c4fb-4a62-a62a-ba4ba62e1dda" satisfied condition "Succeeded or Failed" Feb 4 14:04:49.835: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-9e7bff39-c4fb-4a62-a62a-ba4ba62e1dda container client-container: STEP: delete the pod Feb 4 14:04:49.926: INFO: Waiting for pod downwardapi-volume-9e7bff39-c4fb-4a62-a62a-ba4ba62e1dda to disappear Feb 4 14:04:49.932: INFO: Pod downwardapi-volume-9e7bff39-c4fb-4a62-a62a-ba4ba62e1dda no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:04:49.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4589" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":218,"skipped":3803,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:04:50.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name configmap-test-upd-2d9d498c-5352-4118-935c-50ac4ef2a54f STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:04:56.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-995" for this suite. 
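The ConfigMap above carries both text and binary payloads; content that is not valid UTF-8 is stored base64-encoded under binaryData rather than data. A quick sketch from the CLI (names are illustrative):

printf '\x00\xff\xfe\xfd' > payload.bin                          # guaranteed non-UTF-8
kubectl create configmap binary-demo --from-file=payload.bin
kubectl get configmap binary-demo -o jsonpath='{.binaryData}'    # base64 payload keyed by filename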
• [SLOW TEST:6.109 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":311,"completed":219,"skipped":3820,"failed":0} SSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:04:56.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward api env vars Feb 4 14:04:56.305: INFO: Waiting up to 5m0s for pod "downward-api-e69c8971-4e59-4f1a-885a-a74962bc8159" in namespace "downward-api-2824" to be "Succeeded or Failed" Feb 4 14:04:56.308: INFO: Pod "downward-api-e69c8971-4e59-4f1a-885a-a74962bc8159": Phase="Pending", Reason="", readiness=false. Elapsed: 2.791954ms Feb 4 14:04:58.313: INFO: Pod "downward-api-e69c8971-4e59-4f1a-885a-a74962bc8159": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007669801s Feb 4 14:05:00.318: INFO: Pod "downward-api-e69c8971-4e59-4f1a-885a-a74962bc8159": Phase="Running", Reason="", readiness=true. Elapsed: 4.012381621s Feb 4 14:05:02.323: INFO: Pod "downward-api-e69c8971-4e59-4f1a-885a-a74962bc8159": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017911188s STEP: Saw pod success Feb 4 14:05:02.323: INFO: Pod "downward-api-e69c8971-4e59-4f1a-885a-a74962bc8159" satisfied condition "Succeeded or Failed" Feb 4 14:05:02.326: INFO: Trying to get logs from node latest-worker2 pod downward-api-e69c8971-4e59-4f1a-885a-a74962bc8159 container dapi-container: STEP: delete the pod Feb 4 14:05:02.423: INFO: Waiting for pod downward-api-e69c8971-4e59-4f1a-885a-a74962bc8159 to disappear Feb 4 14:05:02.432: INFO: Pod downward-api-e69c8971-4e59-4f1a-885a-a74962bc8159 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:05:02.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2824" for this suite. 
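The point of this spec: a resourceFieldRef env var naming limits.cpu or limits.memory falls back to the node's allocatable value when the container declares no limits. A minimal sketch (pod name and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-defaults-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo cpu=$CPU_LIMIT mem=$MEMORY_LIMIT"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF
kubectl logs dapi-defaults-demo   # no limits declared, so both values mirror node allocatable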
• [SLOW TEST:6.279 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":311,"completed":220,"skipped":3827,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:05:02.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Feb 4 14:05:02.557: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:05:51.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2678" for this suite. 
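This spec is mostly about watch semantics: creation must be observed as ADDED, and graceful deletion must eventually surface a DELETED event (the long runtime above is dominated by waiting out the grace period). A hand-run sketch, assuming a reasonably recent kubectl for --output-watch-events; names are illustrative:

kubectl get pods --watch --output-watch-events &   # prints ADDED/MODIFIED/DELETED
kubectl run watch-demo --image=busybox --restart=Never -- sleep 3600
kubectl delete pod watch-demo --grace-period=30    # DELETED appears only after cleanup completes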
• [SLOW TEST:48.695 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":311,"completed":221,"skipped":3847,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:05:51.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Feb 4 14:05:51.282: INFO: Waiting up to 1m0s for all nodes to be ready Feb 4 14:06:51.311: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:06:51.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. Feb 4 14:06:55.446: INFO: found a healthy node: latest-worker [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 14:07:11.599: INFO: pods created so far: [1 1 1] Feb 4 14:07:11.599: INFO: length of pods created so far: 3 Feb 4 14:07:51.611: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:07:58.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-9388" for this suite. [AfterEach] PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:07:58.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-4255" for this suite. 
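Preemption is driven by PriorityClass objects; the ReplicaSets in this spec run at different priorities so the scheduler evicts lower-priority pods once the chosen node fills up. A sketch of the moving part (name, value, and description are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-demo
value: 1000000
description: pods at this value can preempt lower-priority pods on full nodes
EOF

Pods opt in via spec.priorityClassName: high-priority-demo.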
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:127.677 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451 runs ReplicaSets to verify preemption running path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":311,"completed":222,"skipped":3853,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:07:58.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Feb 4 14:07:58.913: INFO: Waiting up to 5m0s for pod "downwardapi-volume-81521907-2372-4a80-968c-6ec389e722b7" in namespace "projected-5992" to be "Succeeded or Failed" Feb 4 14:07:58.931: INFO: Pod "downwardapi-volume-81521907-2372-4a80-968c-6ec389e722b7": Phase="Pending", Reason="", readiness=false. Elapsed: 17.771021ms Feb 4 14:08:01.078: INFO: Pod "downwardapi-volume-81521907-2372-4a80-968c-6ec389e722b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164582547s Feb 4 14:08:03.083: INFO: Pod "downwardapi-volume-81521907-2372-4a80-968c-6ec389e722b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.169215147s STEP: Saw pod success Feb 4 14:08:03.083: INFO: Pod "downwardapi-volume-81521907-2372-4a80-968c-6ec389e722b7" satisfied condition "Succeeded or Failed" Feb 4 14:08:03.086: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-81521907-2372-4a80-968c-6ec389e722b7 container client-container: STEP: delete the pod Feb 4 14:08:03.309: INFO: Waiting for pod downwardapi-volume-81521907-2372-4a80-968c-6ec389e722b7 to disappear Feb 4 14:08:03.413: INFO: Pod downwardapi-volume-81521907-2372-4a80-968c-6ec389e722b7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:08:03.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5992" for this suite. 
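defaultMode on a projected volume applies one file mode to everything the volume serves, here a downwardAPI projection of the pod name. A minimal sketch (pod name, image, and mount path are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400            # applied to every projected file
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF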
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":223,"skipped":3873,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:08:03.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test override command Feb 4 14:08:03.567: INFO: Waiting up to 5m0s for pod "client-containers-4594cc7d-e7e8-41f8-86b6-bb9c3b97269e" in namespace "containers-8632" to be "Succeeded or Failed" Feb 4 14:08:03.578: INFO: Pod "client-containers-4594cc7d-e7e8-41f8-86b6-bb9c3b97269e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.664162ms Feb 4 14:08:05.814: INFO: Pod "client-containers-4594cc7d-e7e8-41f8-86b6-bb9c3b97269e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.246790658s Feb 4 14:08:07.938: INFO: Pod "client-containers-4594cc7d-e7e8-41f8-86b6-bb9c3b97269e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.370658347s Feb 4 14:08:09.941: INFO: Pod "client-containers-4594cc7d-e7e8-41f8-86b6-bb9c3b97269e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.373609742s STEP: Saw pod success Feb 4 14:08:09.941: INFO: Pod "client-containers-4594cc7d-e7e8-41f8-86b6-bb9c3b97269e" satisfied condition "Succeeded or Failed" Feb 4 14:08:09.943: INFO: Trying to get logs from node latest-worker2 pod client-containers-4594cc7d-e7e8-41f8-86b6-bb9c3b97269e container agnhost-container: STEP: delete the pod Feb 4 14:08:09.986: INFO: Waiting for pod client-containers-4594cc7d-e7e8-41f8-86b6-bb9c3b97269e to disappear Feb 4 14:08:09.994: INFO: Pod client-containers-4594cc7d-e7e8-41f8-86b6-bb9c3b97269e no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:08:09.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8632" for this suite. 
• [SLOW TEST:6.580 seconds] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":311,"completed":224,"skipped":3890,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:08:10.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Feb 4 14:08:10.072: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f0d287fe-6793-412c-beb3-c47aa461f902" in namespace "downward-api-3708" to be "Succeeded or Failed" Feb 4 14:08:10.103: INFO: Pod "downwardapi-volume-f0d287fe-6793-412c-beb3-c47aa461f902": Phase="Pending", Reason="", readiness=false. Elapsed: 30.456801ms Feb 4 14:08:12.108: INFO: Pod "downwardapi-volume-f0d287fe-6793-412c-beb3-c47aa461f902": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035667839s Feb 4 14:08:14.113: INFO: Pod "downwardapi-volume-f0d287fe-6793-412c-beb3-c47aa461f902": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040391559s STEP: Saw pod success Feb 4 14:08:14.113: INFO: Pod "downwardapi-volume-f0d287fe-6793-412c-beb3-c47aa461f902" satisfied condition "Succeeded or Failed" Feb 4 14:08:14.115: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-f0d287fe-6793-412c-beb3-c47aa461f902 container client-container: STEP: delete the pod Feb 4 14:08:14.267: INFO: Waiting for pod downwardapi-volume-f0d287fe-6793-412c-beb3-c47aa461f902 to disappear Feb 4 14:08:14.289: INFO: Pod downwardapi-volume-f0d287fe-6793-412c-beb3-c47aa461f902 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:08:14.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3708" for this suite. 
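Same downward API idea as the projected variant sketched earlier, but through the standalone downwardAPI volume type; only the volume stanza differs. A compact sketch (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF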
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":311,"completed":225,"skipped":3904,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:08:14.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:299 [It] should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a replication controller Feb 4 14:08:14.472: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-4225 create -f -' Feb 4 14:08:14.792: INFO: stderr: "" Feb 4 14:08:14.792: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 4 14:08:14.792: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-4225 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 4 14:08:14.970: INFO: stderr: "" Feb 4 14:08:14.970: INFO: stdout: "update-demo-nautilus-9ztgr update-demo-nautilus-d6sp8 " Feb 4 14:08:14.970: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-4225 get pods update-demo-nautilus-9ztgr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 4 14:08:15.091: INFO: stderr: "" Feb 4 14:08:15.091: INFO: stdout: "" Feb 4 14:08:15.091: INFO: update-demo-nautilus-9ztgr is created but not running Feb 4 14:08:20.091: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-4225 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Feb 4 14:08:20.200: INFO: stderr: "" Feb 4 14:08:20.200: INFO: stdout: "update-demo-nautilus-9ztgr update-demo-nautilus-d6sp8 " Feb 4 14:08:20.200: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-4225 get pods update-demo-nautilus-9ztgr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Feb 4 14:08:20.300: INFO: stderr: "" Feb 4 14:08:20.300: INFO: stdout: "true" Feb 4 14:08:20.301: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-4225 get pods update-demo-nautilus-9ztgr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 4 14:08:20.413: INFO: stderr: "" Feb 4 14:08:20.413: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Feb 4 14:08:20.413: INFO: validating pod update-demo-nautilus-9ztgr Feb 4 14:08:20.417: INFO: got data: { "image": "nautilus.jpg" } Feb 4 14:08:20.417: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 4 14:08:20.417: INFO: update-demo-nautilus-9ztgr is verified up and running Feb 4 14:08:20.417: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-4225 get pods update-demo-nautilus-d6sp8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Feb 4 14:08:20.517: INFO: stderr: "" Feb 4 14:08:20.517: INFO: stdout: "true" Feb 4 14:08:20.517: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-4225 get pods update-demo-nautilus-d6sp8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Feb 4 14:08:20.618: INFO: stderr: "" Feb 4 14:08:20.618: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" Feb 4 14:08:20.618: INFO: validating pod update-demo-nautilus-d6sp8 Feb 4 14:08:20.622: INFO: got data: { "image": "nautilus.jpg" } Feb 4 14:08:20.622: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 4 14:08:20.622: INFO: update-demo-nautilus-d6sp8 is verified up and running STEP: using delete to clean up resources Feb 4 14:08:20.622: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-4225 delete --grace-period=0 --force -f -' Feb 4 14:08:20.715: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Feb 4 14:08:20.715: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 4 14:08:20.715: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-4225 get rc,svc -l name=update-demo --no-headers' Feb 4 14:08:20.816: INFO: stderr: "No resources found in kubectl-4225 namespace.\n" Feb 4 14:08:20.816: INFO: stdout: "" Feb 4 14:08:20.816: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-4225 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 4 14:08:20.922: INFO: stderr: "" Feb 4 14:08:20.922: INFO: stdout: "update-demo-nautilus-9ztgr\nupdate-demo-nautilus-d6sp8\n" Feb 4 14:08:21.422: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-4225 get rc,svc -l name=update-demo --no-headers' Feb 4 14:08:21.647: INFO: stderr: "No resources found in kubectl-4225 namespace.\n" Feb 4 14:08:21.647: INFO: stdout: "" Feb 4 14:08:21.647: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-4225 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 4 14:08:21.780: INFO: stderr: "" Feb 4 14:08:21.780: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:08:21.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4225" for this suite. 
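The cleanup above polls go-template output until no pod lacking a deletionTimestamp remains. On newer kubectl the same check can likely be a single wait (an assumption about your client version; --for=delete is not available in very old releases):

kubectl -n kubectl-4225 wait --for=delete pod -l name=update-demo --timeout=60s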
• [SLOW TEST:7.444 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:297 should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":311,"completed":226,"skipped":3945,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:08:21.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-8055 [It] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating statefulset ss in namespace statefulset-8055 Feb 4 14:08:22.342: INFO: Found 0 stateful pods, waiting for 1 Feb 4 14:08:32.347: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Feb 4 14:08:32.420: INFO: Deleting all statefulset in ns statefulset-8055 Feb 4 14:08:32.436: INFO: Scaling statefulset ss to 0 Feb 4 14:09:42.519: INFO: Waiting for statefulset status.replicas updated to 0 Feb 4 14:09:42.521: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:09:42.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8055" for this suite. 
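The scale subresource exercised here is a separate endpoint from the StatefulSet object itself, which is what lets kubectl scale (and autoscalers) change replicas without touching the rest of the spec. Equivalent CLI probes, assuming the StatefulSet ss still exists (the AfterEach above deletes it):

kubectl get --raw /apis/apps/v1/namespaces/statefulset-8055/statefulsets/ss/scale
kubectl -n statefulset-8055 scale statefulset ss --replicas=2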
• [SLOW TEST:80.761 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":311,"completed":227,"skipped":3975,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:09:42.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-714.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-714.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-714.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-714.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-714.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-714.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-714.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-714.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-714.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-714.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 4 14:09:50.765: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:09:50.768: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:09:50.772: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:09:50.775: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:09:50.784: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:09:50.787: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:09:50.790: INFO: Unable to read jessie_udp@dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the 
server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:09:50.793: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:09:50.799: INFO: Lookups using dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local wheezy_udp@dns-test-service-2.dns-714.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-714.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local jessie_udp@dns-test-service-2.dns-714.svc.cluster.local jessie_tcp@dns-test-service-2.dns-714.svc.cluster.local] Feb 4 14:09:55.805: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:09:55.809: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:09:55.811: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:09:55.814: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:09:55.821: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:09:55.824: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:09:55.827: INFO: Unable to read jessie_udp@dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:09:55.830: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:09:55.837: INFO: Lookups using dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local wheezy_udp@dns-test-service-2.dns-714.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-714.svc.cluster.local 
jessie_udp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local jessie_udp@dns-test-service-2.dns-714.svc.cluster.local jessie_tcp@dns-test-service-2.dns-714.svc.cluster.local] Feb 4 14:10:00.803: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:10:00.807: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:10:00.810: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:10:00.812: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:10:00.821: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:10:00.823: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:10:00.826: INFO: Unable to read jessie_udp@dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:10:00.829: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:10:00.835: INFO: Lookups using dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local wheezy_udp@dns-test-service-2.dns-714.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-714.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local jessie_udp@dns-test-service-2.dns-714.svc.cluster.local jessie_tcp@dns-test-service-2.dns-714.svc.cluster.local] Feb 4 14:10:05.804: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:10:05.808: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested 
resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:10:05.812: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:10:05.815: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:10:05.825: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:10:05.828: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:10:05.831: INFO: Unable to read jessie_udp@dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:10:05.834: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:10:05.839: INFO: Lookups using dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local wheezy_udp@dns-test-service-2.dns-714.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-714.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local jessie_udp@dns-test-service-2.dns-714.svc.cluster.local jessie_tcp@dns-test-service-2.dns-714.svc.cluster.local] Feb 4 14:10:10.804: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:10:10.807: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:10:10.810: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:10:10.814: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:10:10.824: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local from pod 
dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:10:10.827: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:10:10.830: INFO: Unable to read jessie_udp@dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:10:10.833: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:10:10.839: INFO: Lookups using dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local wheezy_udp@dns-test-service-2.dns-714.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-714.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local jessie_udp@dns-test-service-2.dns-714.svc.cluster.local jessie_tcp@dns-test-service-2.dns-714.svc.cluster.local] Feb 4 14:10:15.823: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:10:15.826: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:10:15.829: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:10:15.833: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:10:15.842: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:10:15.845: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:10:15.848: INFO: Unable to read jessie_udp@dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:10:15.852: INFO: Unable to read 
jessie_tcp@dns-test-service-2.dns-714.svc.cluster.local from pod dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176: the server could not find the requested resource (get pods dns-test-d311f4de-000d-4962-9e77-2ed882e54176) Feb 4 14:10:15.858: INFO: Lookups using dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local wheezy_udp@dns-test-service-2.dns-714.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-714.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local jessie_udp@dns-test-service-2.dns-714.svc.cluster.local jessie_tcp@dns-test-service-2.dns-714.svc.cluster.local] Feb 4 14:10:20.835: INFO: DNS probes using dns-714/dns-test-d311f4de-000d-4962-9e77-2ed882e54176 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:10:21.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-714" for this suite. • [SLOW TEST:39.054 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":311,"completed":228,"skipped":3988,"failed":0} SSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:10:21.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:10:25.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-784" for this suite. 
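Note: the long run of "Unable to read ... the server could not find the requested resource" entries above is the expected polling phase of this test: the probes fail until kubelet publishes the pod's hostname/subdomain records for the headless service, after which all eight lookups (wheezy and jessie images, UDP and TCP, pod and service names) succeed at 14:10:20. A minimal sketch of the lookups being polled, using only the Go standard library and the names from this log; it assumes it runs inside the cluster with cluster DNS configured, and unlike the e2e test it cannot distinguish UDP from TCP transport:

package main

import (
	"fmt"
	"net"
)

func main() {
	names := []string{
		// hostname.subdomain record published for the querier pod
		"dns-querier-2.dns-test-service-2.dns-714.svc.cluster.local",
		// the headless service record itself
		"dns-test-service-2.dns-714.svc.cluster.local",
	}
	for _, name := range names {
		addrs, err := net.LookupHost(name)
		if err != nil {
			fmt.Printf("lookup %s failed: %v\n", name, err) // the "Unable to read" phase
			continue
		}
		fmt.Printf("%s -> %v\n", name, addrs) // the "DNS probes ... succeeded" phase
	}
}

The read-only-root-filesystem Kubelet test just above comes down to a single securityContext flag; a sketch of the container shape it exercises (field names are standard k8s.io/api/core/v1, the exact pod the suite builds may differ):

package sketch

import corev1 "k8s.io/api/core/v1"

func readOnlyRootContainer() corev1.Container {
	readOnly := true
	return corev1.Container{
		Name:    "busybox-readonly",
		Image:   "busybox",
		Command: []string{"/bin/sh", "-c", "echo test > /file"}, // must fail: "/" is read-only
		SecurityContext: &corev1.SecurityContext{
			ReadOnlyRootFilesystem: &readOnly,
		},
	}
}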
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":229,"skipped":3992,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:10:25.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: validating api versions Feb 4 14:10:25.801: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-3952 api-versions' Feb 4 14:10:26.026: INFO: stderr: "" Feb 4 14:10:26.026: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npingcap.com/v1alpha1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:10:26.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3952" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":311,"completed":230,"skipped":4000,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:10:26.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 4 14:10:34.258: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:10:34.274: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:10:36.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:10:36.277: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:10:38.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:10:38.284: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:10:40.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:10:40.280: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:10:42.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:10:42.280: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:10:44.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:10:44.279: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:10:46.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:10:46.284: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:10:48.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:10:48.280: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:10:50.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:10:50.280: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:10:52.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:10:52.280: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:10:54.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:10:54.279: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:10:56.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:10:56.278: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:10:58.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:10:58.279: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:11:00.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 
14:11:00.279: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:11:02.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:11:02.278: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:11:04.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:11:04.279: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:11:06.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:11:06.297: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:11:08.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:11:08.279: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:11:10.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:11:10.279: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:11:12.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:11:12.285: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:11:14.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:11:14.279: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:11:16.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:11:16.280: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:11:18.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:11:18.278: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:11:20.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:11:20.278: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:11:22.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:11:22.279: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:11:24.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:11:24.279: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:11:26.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:11:26.303: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:11:28.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:11:28.278: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:11:30.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:11:30.280: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:11:32.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:11:32.279: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:11:34.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:11:34.278: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:11:36.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:11:36.279: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:11:38.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:11:38.279: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:11:40.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:11:40.280: INFO: Pod pod-with-poststart-http-hook still exists Feb 4 14:11:42.274: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 4 14:11:42.279: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:11:42.280: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3922" for this suite. • [SLOW TEST:76.252 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:43 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":311,"completed":231,"skipped":4018,"failed":0} [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:11:42.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Feb 4 14:11:42.462: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9e80c0c7-cbde-4a81-8e97-496f754d41b2" in namespace "projected-4467" to be "Succeeded or Failed" Feb 4 14:11:42.549: INFO: Pod "downwardapi-volume-9e80c0c7-cbde-4a81-8e97-496f754d41b2": Phase="Pending", Reason="", readiness=false. Elapsed: 87.267852ms Feb 4 14:11:44.554: INFO: Pod "downwardapi-volume-9e80c0c7-cbde-4a81-8e97-496f754d41b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092555683s Feb 4 14:11:46.560: INFO: Pod "downwardapi-volume-9e80c0c7-cbde-4a81-8e97-496f754d41b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.097868864s STEP: Saw pod success Feb 4 14:11:46.560: INFO: Pod "downwardapi-volume-9e80c0c7-cbde-4a81-8e97-496f754d41b2" satisfied condition "Succeeded or Failed" Feb 4 14:11:46.563: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-9e80c0c7-cbde-4a81-8e97-496f754d41b2 container client-container: STEP: delete the pod Feb 4 14:11:46.795: INFO: Waiting for pod downwardapi-volume-9e80c0c7-cbde-4a81-8e97-496f754d41b2 to disappear Feb 4 14:11:46.801: INFO: Pod downwardapi-volume-9e80c0c7-cbde-4a81-8e97-496f754d41b2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:11:46.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4467" for this suite. 
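Note: two pod shapes drive the last two tests. The lifecycle-hook pod declares a postStart HTTP GET aimed at the helper pod created in BeforeEach ("create the container to handle the HTTPGet hook request"); the long "still exists" run afterwards is just graceful pod deletion being polled every two seconds. The downward-API pod projects the container's limits.cpu as a file. A sketch of both, with illustrative target address, port, and file path; note that recent k8s.io/api releases name the hook type corev1.LifecycleHandler, while releases around the v1.21 era of this log called it corev1.Handler:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// postStartHTTPHook returns the lifecycle block for a pod like
// pod-with-poststart-http-hook: kubelet GETs the helper pod right after
// the container starts, and the test then checks the helper saw the request.
func postStartHTTPHook(helperPodIP string) *corev1.Lifecycle {
	return &corev1.Lifecycle{
		PostStart: &corev1.LifecycleHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Host: helperPodIP,           // assumption: helper pod IP
				Path: "/echo?msg=poststart", // assumption: echo endpoint
				Port: intstr.FromInt(8080),  // assumption: helper port
			},
		},
	}
}

// cpuLimitVolume returns a projected downward-API volume exposing
// limits.cpu of "client-container" as the file "cpu_limit", which the
// test reads back from the pod's logs.
func cpuLimitVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				}},
			},
		},
	}
}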
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":311,"completed":232,"skipped":4018,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:11:46.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:12:03.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4414" for this suite. • [SLOW TEST:16.292 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":311,"completed":233,"skipped":4021,"failed":0} SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:12:03.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating pod pod-subpath-test-projected-r4v4 STEP: Creating a pod to test atomic-volume-subpath Feb 4 14:12:03.222: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-r4v4" in namespace "subpath-8404" to be "Succeeded or Failed" Feb 4 14:12:03.233: INFO: Pod "pod-subpath-test-projected-r4v4": Phase="Pending", Reason="", readiness=false. Elapsed: 11.586893ms Feb 4 14:12:05.245: INFO: Pod "pod-subpath-test-projected-r4v4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02302267s Feb 4 14:12:07.249: INFO: Pod "pod-subpath-test-projected-r4v4": Phase="Running", Reason="", readiness=true. Elapsed: 4.027797478s Feb 4 14:12:09.253: INFO: Pod "pod-subpath-test-projected-r4v4": Phase="Running", Reason="", readiness=true. Elapsed: 6.031701616s Feb 4 14:12:11.259: INFO: Pod "pod-subpath-test-projected-r4v4": Phase="Running", Reason="", readiness=true. Elapsed: 8.03690101s Feb 4 14:12:13.264: INFO: Pod "pod-subpath-test-projected-r4v4": Phase="Running", Reason="", readiness=true. Elapsed: 10.042034493s Feb 4 14:12:15.269: INFO: Pod "pod-subpath-test-projected-r4v4": Phase="Running", Reason="", readiness=true. Elapsed: 12.04773011s Feb 4 14:12:17.275: INFO: Pod "pod-subpath-test-projected-r4v4": Phase="Running", Reason="", readiness=true. Elapsed: 14.053010725s Feb 4 14:12:19.279: INFO: Pod "pod-subpath-test-projected-r4v4": Phase="Running", Reason="", readiness=true. Elapsed: 16.057257568s Feb 4 14:12:21.284: INFO: Pod "pod-subpath-test-projected-r4v4": Phase="Running", Reason="", readiness=true. Elapsed: 18.062458308s Feb 4 14:12:23.289: INFO: Pod "pod-subpath-test-projected-r4v4": Phase="Running", Reason="", readiness=true. Elapsed: 20.066809444s Feb 4 14:12:25.293: INFO: Pod "pod-subpath-test-projected-r4v4": Phase="Running", Reason="", readiness=true. Elapsed: 22.071339136s Feb 4 14:12:27.298: INFO: Pod "pod-subpath-test-projected-r4v4": Phase="Running", Reason="", readiness=true. Elapsed: 24.076004163s Feb 4 14:12:29.302: INFO: Pod "pod-subpath-test-projected-r4v4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.080496337s STEP: Saw pod success Feb 4 14:12:29.302: INFO: Pod "pod-subpath-test-projected-r4v4" satisfied condition "Succeeded or Failed" Feb 4 14:12:29.305: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-r4v4 container test-container-subpath-projected-r4v4: STEP: delete the pod Feb 4 14:12:29.365: INFO: Waiting for pod pod-subpath-test-projected-r4v4 to disappear Feb 4 14:12:29.375: INFO: Pod pod-subpath-test-projected-r4v4 no longer exists STEP: Deleting pod pod-subpath-test-projected-r4v4 Feb 4 14:12:29.375: INFO: Deleting pod "pod-subpath-test-projected-r4v4" in namespace "subpath-8404" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:12:29.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8404" for this suite. • [SLOW TEST:26.284 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":311,"completed":234,"skipped":4025,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:12:29.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 4 14:12:30.128: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 4 14:12:32.139: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748044750, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748044750, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748044750, loc:(*time.Location)(0x7886c60)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748044750, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 14:12:34.145: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748044750, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748044750, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748044750, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748044750, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 4 14:12:37.197: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:12:38.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-136" for this suite. STEP: Destroying namespace "webhook-136-markers" for this suite. 
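Note: the ResourceQuota entry a few tests up hinges on quota scopes: a Terminating-scoped quota only counts pods that set spec.activeDeadlineSeconds, and a NotTerminating-scoped one only counts pods without it, which is exactly the capture/ignore pattern its STEP lines walk through. The subpath test after it mounts one key of a projected volume via volumeMounts[].subPath. Sketches of both objects (hard limits and key names are illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// scopedQuota builds a quota that only measures pods matching the scope, e.g.
//   scopedQuota("quota-terminating", corev1.ResourceQuotaScopeTerminating)
//   scopedQuota("quota-not-terminating", corev1.ResourceQuotaScopeNotTerminating)
func scopedQuota(name string, scope corev1.ResourceQuotaScope) *corev1.ResourceQuota {
	return &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.ResourceQuotaSpec{
			Hard:   corev1.ResourceList{corev1.ResourcePods: resource.MustParse("5")},
			Scopes: []corev1.ResourceQuotaScope{scope},
		},
	}
}

// subPathMount exposes a single file of a volume at MountPath, the shape
// behind pod-subpath-test-projected-r4v4 (the key name is an assumption).
func subPathMount() corev1.VolumeMount {
	return corev1.VolumeMount{
		Name:      "test-volume",
		MountPath: "/test-volume",
		SubPath:   "projected-file",
	}
}

And the "Listing all of the created ... webhooks" / "Deleting the collection" steps of the admission-webhook test just above map onto the v1 admissionregistration client; the empty list options here stand in for whatever label selector the suite actually uses, which the log does not show:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	webhooks := cs.AdmissionregistrationV1().MutatingWebhookConfigurations()

	list, err := webhooks.List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, w := range list.Items {
		fmt.Println(w.Name)
	}
	// Deleting the collection removes every configuration matching the
	// list options, after which new configMaps are no longer mutated.
	if err := webhooks.DeleteCollection(ctx, metav1.DeleteOptions{}, metav1.ListOptions{}); err != nil {
		panic(err)
	}
}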
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:8.843 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":311,"completed":235,"skipped":4054,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:12:38.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-2sncj in namespace proxy-2637 I0204 14:12:38.492515 7 runners.go:190] Created replication controller with name: proxy-service-2sncj, namespace: proxy-2637, replica count: 1 I0204 14:12:39.542902 7 runners.go:190] proxy-service-2sncj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 14:12:40.543148 7 runners.go:190] proxy-service-2sncj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 14:12:41.543422 7 runners.go:190] proxy-service-2sncj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0204 14:12:42.543738 7 runners.go:190] proxy-service-2sncj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0204 14:12:43.544033 7 runners.go:190] proxy-service-2sncj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0204 14:12:44.544304 7 runners.go:190] proxy-service-2sncj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0204 14:12:45.544549 7 runners.go:190] proxy-service-2sncj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0204 14:12:46.544768 7 runners.go:190] proxy-service-2sncj Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 4 14:12:46.549: INFO: setup took 8.103984975s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Feb 4 14:12:46.555: INFO: (0) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:162/proxy/: bar (200; 6.198419ms) Feb 4 14:12:46.555: INFO: (0) 
/api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:1080/proxy/: test<... (200; 6.034201ms) Feb 4 14:12:46.556: INFO: (0) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:160/proxy/: foo (200; 6.483309ms) Feb 4 14:12:46.556: INFO: (0) /api/v1/namespaces/proxy-2637/services/proxy-service-2sncj:portname2/proxy/: bar (200; 6.706421ms) Feb 4 14:12:46.557: INFO: (0) /api/v1/namespaces/proxy-2637/services/proxy-service-2sncj:portname1/proxy/: foo (200; 7.478402ms) Feb 4 14:12:46.557: INFO: (0) /api/v1/namespaces/proxy-2637/services/http:proxy-service-2sncj:portname1/proxy/: foo (200; 7.747412ms) Feb 4 14:12:46.557: INFO: (0) /api/v1/namespaces/proxy-2637/services/http:proxy-service-2sncj:portname2/proxy/: bar (200; 7.889863ms) Feb 4 14:12:46.558: INFO: (0) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:160/proxy/: foo (200; 8.278859ms) Feb 4 14:12:46.558: INFO: (0) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q/proxy/: test (200; 8.643332ms) Feb 4 14:12:46.558: INFO: (0) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:162/proxy/: bar (200; 8.733094ms) Feb 4 14:12:46.559: INFO: (0) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:1080/proxy/: ... (200; 9.399157ms) Feb 4 14:12:46.565: INFO: (0) /api/v1/namespaces/proxy-2637/services/https:proxy-service-2sncj:tlsportname1/proxy/: tls baz (200; 15.672785ms) Feb 4 14:12:46.565: INFO: (0) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:460/proxy/: tls baz (200; 15.703513ms) Feb 4 14:12:46.567: INFO: (0) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:462/proxy/: tls qux (200; 17.440574ms) Feb 4 14:12:46.567: INFO: (0) /api/v1/namespaces/proxy-2637/services/https:proxy-service-2sncj:tlsportname2/proxy/: tls qux (200; 17.49429ms) Feb 4 14:12:46.567: INFO: (0) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:443/proxy/: test (200; 3.87189ms) Feb 4 14:12:46.571: INFO: (1) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:443/proxy/: ... (200; 4.284779ms) Feb 4 14:12:46.572: INFO: (1) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:160/proxy/: foo (200; 4.378098ms) Feb 4 14:12:46.572: INFO: (1) /api/v1/namespaces/proxy-2637/services/http:proxy-service-2sncj:portname2/proxy/: bar (200; 4.352707ms) Feb 4 14:12:46.572: INFO: (1) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:160/proxy/: foo (200; 4.428729ms) Feb 4 14:12:46.572: INFO: (1) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:162/proxy/: bar (200; 4.378619ms) Feb 4 14:12:46.572: INFO: (1) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:460/proxy/: tls baz (200; 4.481346ms) Feb 4 14:12:46.572: INFO: (1) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:1080/proxy/: test<... 
(200; 4.757537ms) Feb 4 14:12:46.572: INFO: (1) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:462/proxy/: tls qux (200; 4.752801ms) Feb 4 14:12:46.572: INFO: (1) /api/v1/namespaces/proxy-2637/services/proxy-service-2sncj:portname1/proxy/: foo (200; 4.942788ms) Feb 4 14:12:46.572: INFO: (1) /api/v1/namespaces/proxy-2637/services/https:proxy-service-2sncj:tlsportname1/proxy/: tls baz (200; 4.976939ms) Feb 4 14:12:46.572: INFO: (1) /api/v1/namespaces/proxy-2637/services/proxy-service-2sncj:portname2/proxy/: bar (200; 5.066118ms) Feb 4 14:12:46.572: INFO: (1) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:162/proxy/: bar (200; 5.089945ms) Feb 4 14:12:46.572: INFO: (1) /api/v1/namespaces/proxy-2637/services/http:proxy-service-2sncj:portname1/proxy/: foo (200; 5.067978ms) Feb 4 14:12:46.572: INFO: (1) /api/v1/namespaces/proxy-2637/services/https:proxy-service-2sncj:tlsportname2/proxy/: tls qux (200; 5.139057ms) Feb 4 14:12:46.575: INFO: (2) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:1080/proxy/: ... (200; 2.597747ms) Feb 4 14:12:46.576: INFO: (2) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:443/proxy/: test<... (200; 5.12031ms) Feb 4 14:12:46.578: INFO: (2) /api/v1/namespaces/proxy-2637/services/proxy-service-2sncj:portname2/proxy/: bar (200; 5.135789ms) Feb 4 14:12:46.578: INFO: (2) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:162/proxy/: bar (200; 5.225003ms) Feb 4 14:12:46.578: INFO: (2) /api/v1/namespaces/proxy-2637/services/https:proxy-service-2sncj:tlsportname2/proxy/: tls qux (200; 5.560668ms) Feb 4 14:12:46.578: INFO: (2) /api/v1/namespaces/proxy-2637/services/proxy-service-2sncj:portname1/proxy/: foo (200; 5.639355ms) Feb 4 14:12:46.578: INFO: (2) /api/v1/namespaces/proxy-2637/services/http:proxy-service-2sncj:portname2/proxy/: bar (200; 5.644813ms) Feb 4 14:12:46.578: INFO: (2) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q/proxy/: test (200; 5.636104ms) Feb 4 14:12:46.578: INFO: (2) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:460/proxy/: tls baz (200; 5.875519ms) Feb 4 14:12:46.578: INFO: (2) /api/v1/namespaces/proxy-2637/services/https:proxy-service-2sncj:tlsportname1/proxy/: tls baz (200; 5.901198ms) Feb 4 14:12:46.581: INFO: (3) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:443/proxy/: test<... (200; 3.22809ms) Feb 4 14:12:46.583: INFO: (3) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:1080/proxy/: ... 
(200; 4.243195ms) Feb 4 14:12:46.583: INFO: (3) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:160/proxy/: foo (200; 4.320754ms) Feb 4 14:12:46.583: INFO: (3) /api/v1/namespaces/proxy-2637/services/proxy-service-2sncj:portname1/proxy/: foo (200; 4.510237ms) Feb 4 14:12:46.583: INFO: (3) /api/v1/namespaces/proxy-2637/services/proxy-service-2sncj:portname2/proxy/: bar (200; 4.588258ms) Feb 4 14:12:46.583: INFO: (3) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:160/proxy/: foo (200; 4.750003ms) Feb 4 14:12:46.583: INFO: (3) /api/v1/namespaces/proxy-2637/services/https:proxy-service-2sncj:tlsportname1/proxy/: tls baz (200; 4.849012ms) Feb 4 14:12:46.583: INFO: (3) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:162/proxy/: bar (200; 4.861404ms) Feb 4 14:12:46.584: INFO: (3) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q/proxy/: test (200; 5.067762ms) Feb 4 14:12:46.584: INFO: (3) /api/v1/namespaces/proxy-2637/services/https:proxy-service-2sncj:tlsportname2/proxy/: tls qux (200; 5.079173ms) Feb 4 14:12:46.584: INFO: (3) /api/v1/namespaces/proxy-2637/services/http:proxy-service-2sncj:portname2/proxy/: bar (200; 5.116637ms) Feb 4 14:12:46.584: INFO: (3) /api/v1/namespaces/proxy-2637/services/http:proxy-service-2sncj:portname1/proxy/: foo (200; 5.136239ms) Feb 4 14:12:46.584: INFO: (3) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:460/proxy/: tls baz (200; 5.136183ms) Feb 4 14:12:46.584: INFO: (3) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:162/proxy/: bar (200; 5.239079ms) Feb 4 14:12:46.587: INFO: (4) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:160/proxy/: foo (200; 3.208355ms) Feb 4 14:12:46.589: INFO: (4) /api/v1/namespaces/proxy-2637/services/proxy-service-2sncj:portname2/proxy/: bar (200; 4.682144ms) Feb 4 14:12:46.589: INFO: (4) /api/v1/namespaces/proxy-2637/services/proxy-service-2sncj:portname1/proxy/: foo (200; 4.670714ms) Feb 4 14:12:46.589: INFO: (4) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:1080/proxy/: ... (200; 4.655483ms) Feb 4 14:12:46.589: INFO: (4) /api/v1/namespaces/proxy-2637/services/http:proxy-service-2sncj:portname1/proxy/: foo (200; 4.727965ms) Feb 4 14:12:46.589: INFO: (4) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:162/proxy/: bar (200; 4.671068ms) Feb 4 14:12:46.589: INFO: (4) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:160/proxy/: foo (200; 5.023555ms) Feb 4 14:12:46.589: INFO: (4) /api/v1/namespaces/proxy-2637/services/http:proxy-service-2sncj:portname2/proxy/: bar (200; 5.005371ms) Feb 4 14:12:46.589: INFO: (4) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:1080/proxy/: test<... 
(200; 5.02516ms) Feb 4 14:12:46.589: INFO: (4) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:162/proxy/: bar (200; 5.155486ms) Feb 4 14:12:46.589: INFO: (4) /api/v1/namespaces/proxy-2637/services/https:proxy-service-2sncj:tlsportname2/proxy/: tls qux (200; 5.175007ms) Feb 4 14:12:46.589: INFO: (4) /api/v1/namespaces/proxy-2637/services/https:proxy-service-2sncj:tlsportname1/proxy/: tls baz (200; 5.287308ms) Feb 4 14:12:46.589: INFO: (4) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q/proxy/: test (200; 5.361825ms) Feb 4 14:12:46.589: INFO: (4) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:460/proxy/: tls baz (200; 5.439294ms) Feb 4 14:12:46.589: INFO: (4) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:462/proxy/: tls qux (200; 5.404484ms) Feb 4 14:12:46.589: INFO: (4) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:443/proxy/: ... (200; 4.221473ms) Feb 4 14:12:46.594: INFO: (5) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:162/proxy/: bar (200; 4.279307ms) Feb 4 14:12:46.594: INFO: (5) /api/v1/namespaces/proxy-2637/services/http:proxy-service-2sncj:portname1/proxy/: foo (200; 4.31866ms) Feb 4 14:12:46.594: INFO: (5) /api/v1/namespaces/proxy-2637/services/proxy-service-2sncj:portname2/proxy/: bar (200; 4.375773ms) Feb 4 14:12:46.594: INFO: (5) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:162/proxy/: bar (200; 4.930619ms) Feb 4 14:12:46.594: INFO: (5) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:462/proxy/: tls qux (200; 4.85319ms) Feb 4 14:12:46.594: INFO: (5) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:443/proxy/: test<... (200; 4.99664ms) Feb 4 14:12:46.594: INFO: (5) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q/proxy/: test (200; 4.929517ms) Feb 4 14:12:46.594: INFO: (5) /api/v1/namespaces/proxy-2637/services/proxy-service-2sncj:portname1/proxy/: foo (200; 5.06602ms) Feb 4 14:12:46.594: INFO: (5) /api/v1/namespaces/proxy-2637/services/https:proxy-service-2sncj:tlsportname2/proxy/: tls qux (200; 5.03928ms) Feb 4 14:12:46.594: INFO: (5) /api/v1/namespaces/proxy-2637/services/https:proxy-service-2sncj:tlsportname1/proxy/: tls baz (200; 5.043164ms) Feb 4 14:12:46.594: INFO: (5) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:460/proxy/: tls baz (200; 4.983476ms) Feb 4 14:12:46.599: INFO: (6) /api/v1/namespaces/proxy-2637/services/http:proxy-service-2sncj:portname2/proxy/: bar (200; 3.955929ms) Feb 4 14:12:46.599: INFO: (6) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:162/proxy/: bar (200; 4.210517ms) Feb 4 14:12:46.599: INFO: (6) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:160/proxy/: foo (200; 4.214874ms) Feb 4 14:12:46.599: INFO: (6) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:160/proxy/: foo (200; 4.204025ms) Feb 4 14:12:46.599: INFO: (6) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:1080/proxy/: test<... (200; 4.21326ms) Feb 4 14:12:46.599: INFO: (6) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:1080/proxy/: ... 
(200; 4.2309ms) Feb 4 14:12:46.599: INFO: (6) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q/proxy/: test (200; 4.261959ms) Feb 4 14:12:46.599: INFO: (6) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:460/proxy/: tls baz (200; 4.255362ms) Feb 4 14:12:46.599: INFO: (6) /api/v1/namespaces/proxy-2637/services/http:proxy-service-2sncj:portname1/proxy/: foo (200; 4.259923ms) Feb 4 14:12:46.599: INFO: (6) /api/v1/namespaces/proxy-2637/services/proxy-service-2sncj:portname2/proxy/: bar (200; 4.247604ms) Feb 4 14:12:46.599: INFO: (6) /api/v1/namespaces/proxy-2637/services/proxy-service-2sncj:portname1/proxy/: foo (200; 4.358738ms) Feb 4 14:12:46.599: INFO: (6) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:443/proxy/: test<... (200; 5.975634ms) Feb 4 14:12:46.606: INFO: (7) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:1080/proxy/: ... (200; 5.932958ms) Feb 4 14:12:46.606: INFO: (7) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q/proxy/: test (200; 5.970442ms) Feb 4 14:12:46.606: INFO: (7) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:443/proxy/: test<... (200; 7.559765ms) Feb 4 14:12:46.614: INFO: (8) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:160/proxy/: foo (200; 7.674387ms) Feb 4 14:12:46.615: INFO: (8) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:160/proxy/: foo (200; 7.997253ms) Feb 4 14:12:46.615: INFO: (8) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:162/proxy/: bar (200; 8.063542ms) Feb 4 14:12:46.615: INFO: (8) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:162/proxy/: bar (200; 8.157343ms) Feb 4 14:12:46.615: INFO: (8) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:460/proxy/: tls baz (200; 8.106407ms) Feb 4 14:12:46.615: INFO: (8) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q/proxy/: test (200; 8.113148ms) Feb 4 14:12:46.615: INFO: (8) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:1080/proxy/: ... (200; 8.096821ms) Feb 4 14:12:46.615: INFO: (8) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:462/proxy/: tls qux (200; 8.185437ms) Feb 4 14:12:46.625: INFO: (9) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:1080/proxy/: test<... (200; 9.839568ms) Feb 4 14:12:46.625: INFO: (9) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:160/proxy/: foo (200; 9.922532ms) Feb 4 14:12:46.625: INFO: (9) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:460/proxy/: tls baz (200; 10.060798ms) Feb 4 14:12:46.625: INFO: (9) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:162/proxy/: bar (200; 10.472989ms) Feb 4 14:12:46.625: INFO: (9) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:462/proxy/: tls qux (200; 10.444121ms) Feb 4 14:12:46.626: INFO: (9) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q/proxy/: test (200; 11.343784ms) Feb 4 14:12:46.626: INFO: (9) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:162/proxy/: bar (200; 11.408272ms) Feb 4 14:12:46.626: INFO: (9) /api/v1/namespaces/proxy-2637/services/https:proxy-service-2sncj:tlsportname2/proxy/: tls qux (200; 11.380473ms) Feb 4 14:12:46.626: INFO: (9) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:1080/proxy/: ... 
(200; 11.43651ms) Feb 4 14:12:46.626: INFO: (9) /api/v1/namespaces/proxy-2637/services/http:proxy-service-2sncj:portname1/proxy/: foo (200; 11.424448ms) Feb 4 14:12:46.626: INFO: (9) /api/v1/namespaces/proxy-2637/services/proxy-service-2sncj:portname2/proxy/: bar (200; 11.411669ms) Feb 4 14:12:46.626: INFO: (9) /api/v1/namespaces/proxy-2637/services/https:proxy-service-2sncj:tlsportname1/proxy/: tls baz (200; 11.493397ms) Feb 4 14:12:46.626: INFO: (9) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:160/proxy/: foo (200; 11.40035ms) Feb 4 14:12:46.626: INFO: (9) /api/v1/namespaces/proxy-2637/services/proxy-service-2sncj:portname1/proxy/: foo (200; 11.464076ms) Feb 4 14:12:46.626: INFO: (9) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:443/proxy/: test<... (200; 4.755006ms) Feb 4 14:12:46.632: INFO: (10) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q/proxy/: test (200; 5.039147ms) Feb 4 14:12:46.632: INFO: (10) /api/v1/namespaces/proxy-2637/services/proxy-service-2sncj:portname2/proxy/: bar (200; 5.186422ms) Feb 4 14:12:46.632: INFO: (10) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:443/proxy/: ... (200; 5.677214ms) Feb 4 14:12:46.632: INFO: (10) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:162/proxy/: bar (200; 5.623643ms) Feb 4 14:12:46.632: INFO: (10) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:162/proxy/: bar (200; 5.644655ms) Feb 4 14:12:46.632: INFO: (10) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:160/proxy/: foo (200; 5.838079ms) Feb 4 14:12:46.632: INFO: (10) /api/v1/namespaces/proxy-2637/services/http:proxy-service-2sncj:portname1/proxy/: foo (200; 5.925116ms) Feb 4 14:12:46.632: INFO: (10) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:460/proxy/: tls baz (200; 5.941737ms) Feb 4 14:12:46.636: INFO: (11) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:160/proxy/: foo (200; 3.251534ms) Feb 4 14:12:46.636: INFO: (11) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:1080/proxy/: test<... (200; 3.289117ms) Feb 4 14:12:46.636: INFO: (11) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q/proxy/: test (200; 3.285281ms) Feb 4 14:12:46.636: INFO: (11) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:460/proxy/: tls baz (200; 3.276847ms) Feb 4 14:12:46.636: INFO: (11) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:162/proxy/: bar (200; 3.35006ms) Feb 4 14:12:46.636: INFO: (11) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:162/proxy/: bar (200; 3.368724ms) Feb 4 14:12:46.637: INFO: (11) /api/v1/namespaces/proxy-2637/services/proxy-service-2sncj:portname2/proxy/: bar (200; 4.604527ms) Feb 4 14:12:46.637: INFO: (11) /api/v1/namespaces/proxy-2637/services/http:proxy-service-2sncj:portname2/proxy/: bar (200; 4.616845ms) Feb 4 14:12:46.637: INFO: (11) /api/v1/namespaces/proxy-2637/services/proxy-service-2sncj:portname1/proxy/: foo (200; 4.822887ms) Feb 4 14:12:46.637: INFO: (11) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:1080/proxy/: ... (200; 4.85503ms) Feb 4 14:12:46.637: INFO: (11) /api/v1/namespaces/proxy-2637/services/https:proxy-service-2sncj:tlsportname2/proxy/: tls qux (200; 4.910851ms) Feb 4 14:12:46.637: INFO: (11) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:160/proxy/: foo (200; 4.880919ms) Feb 4 14:12:46.637: INFO: (11) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:443/proxy/: test<... 
(200; 3.455404ms) Feb 4 14:12:46.642: INFO: (12) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:462/proxy/: tls qux (200; 4.211157ms) Feb 4 14:12:46.642: INFO: (12) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:160/proxy/: foo (200; 4.223103ms) Feb 4 14:12:46.642: INFO: (12) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:1080/proxy/: ... (200; 4.249112ms) Feb 4 14:12:46.642: INFO: (12) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q/proxy/: test (200; 4.29288ms) Feb 4 14:12:46.642: INFO: (12) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:160/proxy/: foo (200; 4.325234ms) Feb 4 14:12:46.642: INFO: (12) /api/v1/namespaces/proxy-2637/services/http:proxy-service-2sncj:portname1/proxy/: foo (200; 4.457706ms) Feb 4 14:12:46.642: INFO: (12) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:443/proxy/: ... (200; 4.169432ms) Feb 4 14:12:46.648: INFO: (13) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:160/proxy/: foo (200; 4.499675ms) Feb 4 14:12:46.648: INFO: (13) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:462/proxy/: tls qux (200; 4.524389ms) Feb 4 14:12:46.648: INFO: (13) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:160/proxy/: foo (200; 4.574638ms) Feb 4 14:12:46.648: INFO: (13) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:162/proxy/: bar (200; 4.890322ms) Feb 4 14:12:46.648: INFO: (13) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q/proxy/: test (200; 4.761943ms) Feb 4 14:12:46.648: INFO: (13) /api/v1/namespaces/proxy-2637/services/proxy-service-2sncj:portname2/proxy/: bar (200; 5.074474ms) Feb 4 14:12:46.648: INFO: (13) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:460/proxy/: tls baz (200; 5.065083ms) Feb 4 14:12:46.649: INFO: (13) /api/v1/namespaces/proxy-2637/services/https:proxy-service-2sncj:tlsportname2/proxy/: tls qux (200; 5.313607ms) Feb 4 14:12:46.649: INFO: (13) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:1080/proxy/: test<... (200; 5.383386ms) Feb 4 14:12:46.649: INFO: (13) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:443/proxy/: test (200; 3.905165ms) Feb 4 14:12:46.654: INFO: (14) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:160/proxy/: foo (200; 3.971192ms) Feb 4 14:12:46.654: INFO: (14) /api/v1/namespaces/proxy-2637/services/https:proxy-service-2sncj:tlsportname1/proxy/: tls baz (200; 4.117214ms) Feb 4 14:12:46.654: INFO: (14) /api/v1/namespaces/proxy-2637/services/proxy-service-2sncj:portname2/proxy/: bar (200; 4.119631ms) Feb 4 14:12:46.654: INFO: (14) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:443/proxy/: ... (200; 4.746586ms) Feb 4 14:12:46.654: INFO: (14) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:462/proxy/: tls qux (200; 4.90189ms) Feb 4 14:12:46.654: INFO: (14) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:162/proxy/: bar (200; 4.910423ms) Feb 4 14:12:46.654: INFO: (14) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:1080/proxy/: test<... 
(200; 4.945929ms) Feb 4 14:12:46.658: INFO: (15) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:162/proxy/: bar (200; 3.615279ms) Feb 4 14:12:46.658: INFO: (15) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:160/proxy/: foo (200; 3.8852ms) Feb 4 14:12:46.658: INFO: (15) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:460/proxy/: tls baz (200; 3.90068ms) Feb 4 14:12:46.659: INFO: (15) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:160/proxy/: foo (200; 4.268346ms) Feb 4 14:12:46.659: INFO: (15) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:1080/proxy/: ... (200; 4.197057ms) Feb 4 14:12:46.659: INFO: (15) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:162/proxy/: bar (200; 4.328219ms) Feb 4 14:12:46.659: INFO: (15) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:443/proxy/: test<... (200; 4.326857ms) Feb 4 14:12:46.659: INFO: (15) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q/proxy/: test (200; 4.156748ms) Feb 4 14:12:46.659: INFO: (15) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:462/proxy/: tls qux (200; 4.263423ms) Feb 4 14:12:46.660: INFO: (15) /api/v1/namespaces/proxy-2637/services/proxy-service-2sncj:portname1/proxy/: foo (200; 5.12614ms) Feb 4 14:12:46.660: INFO: (15) /api/v1/namespaces/proxy-2637/services/http:proxy-service-2sncj:portname1/proxy/: foo (200; 5.11763ms) Feb 4 14:12:46.660: INFO: (15) /api/v1/namespaces/proxy-2637/services/proxy-service-2sncj:portname2/proxy/: bar (200; 5.183031ms) Feb 4 14:12:46.660: INFO: (15) /api/v1/namespaces/proxy-2637/services/http:proxy-service-2sncj:portname2/proxy/: bar (200; 5.292331ms) Feb 4 14:12:46.660: INFO: (15) /api/v1/namespaces/proxy-2637/services/https:proxy-service-2sncj:tlsportname2/proxy/: tls qux (200; 5.40155ms) Feb 4 14:12:46.660: INFO: (15) /api/v1/namespaces/proxy-2637/services/https:proxy-service-2sncj:tlsportname1/proxy/: tls baz (200; 5.474587ms) Feb 4 14:12:46.663: INFO: (16) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:443/proxy/: test (200; 6.138727ms) Feb 4 14:12:46.666: INFO: (16) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:460/proxy/: tls baz (200; 6.129137ms) Feb 4 14:12:46.666: INFO: (16) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:462/proxy/: tls qux (200; 6.141834ms) Feb 4 14:12:46.666: INFO: (16) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:160/proxy/: foo (200; 6.158986ms) Feb 4 14:12:46.666: INFO: (16) /api/v1/namespaces/proxy-2637/services/http:proxy-service-2sncj:portname2/proxy/: bar (200; 6.155772ms) Feb 4 14:12:46.666: INFO: (16) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:1080/proxy/: test<... (200; 6.187915ms) Feb 4 14:12:46.666: INFO: (16) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:160/proxy/: foo (200; 6.171896ms) Feb 4 14:12:46.666: INFO: (16) /api/v1/namespaces/proxy-2637/services/proxy-service-2sncj:portname1/proxy/: foo (200; 6.271066ms) Feb 4 14:12:46.666: INFO: (16) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:1080/proxy/: ... (200; 6.251759ms) Feb 4 14:12:46.669: INFO: (17) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:1080/proxy/: ... 
(200; 2.231677ms) Feb 4 14:12:46.669: INFO: (17) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:162/proxy/: bar (200; 2.2372ms) Feb 4 14:12:46.670: INFO: (17) /api/v1/namespaces/proxy-2637/services/proxy-service-2sncj:portname2/proxy/: bar (200; 3.490647ms) Feb 4 14:12:46.670: INFO: (17) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:460/proxy/: tls baz (200; 3.502764ms) Feb 4 14:12:46.670: INFO: (17) /api/v1/namespaces/proxy-2637/services/http:proxy-service-2sncj:portname1/proxy/: foo (200; 3.79378ms) Feb 4 14:12:46.670: INFO: (17) /api/v1/namespaces/proxy-2637/services/https:proxy-service-2sncj:tlsportname1/proxy/: tls baz (200; 3.825468ms) Feb 4 14:12:46.670: INFO: (17) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:443/proxy/: test<... (200; 3.925881ms) Feb 4 14:12:46.670: INFO: (17) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:160/proxy/: foo (200; 3.919385ms) Feb 4 14:12:46.670: INFO: (17) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:462/proxy/: tls qux (200; 4.070475ms) Feb 4 14:12:46.670: INFO: (17) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q/proxy/: test (200; 4.028995ms) Feb 4 14:12:46.673: INFO: (17) /api/v1/namespaces/proxy-2637/services/proxy-service-2sncj:portname1/proxy/: foo (200; 6.445505ms) Feb 4 14:12:46.675: INFO: (18) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:160/proxy/: foo (200; 1.70789ms) Feb 4 14:12:46.675: INFO: (18) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:443/proxy/: ... (200; 3.282004ms) Feb 4 14:12:46.677: INFO: (18) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:162/proxy/: bar (200; 3.676344ms) Feb 4 14:12:46.677: INFO: (18) /api/v1/namespaces/proxy-2637/services/https:proxy-service-2sncj:tlsportname1/proxy/: tls baz (200; 3.818248ms) Feb 4 14:12:46.677: INFO: (18) /api/v1/namespaces/proxy-2637/services/http:proxy-service-2sncj:portname1/proxy/: foo (200; 4.097232ms) Feb 4 14:12:46.677: INFO: (18) /api/v1/namespaces/proxy-2637/services/http:proxy-service-2sncj:portname2/proxy/: bar (200; 4.227291ms) Feb 4 14:12:46.677: INFO: (18) /api/v1/namespaces/proxy-2637/services/proxy-service-2sncj:portname2/proxy/: bar (200; 4.348293ms) Feb 4 14:12:46.677: INFO: (18) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:1080/proxy/: test<... (200; 4.526523ms) Feb 4 14:12:46.677: INFO: (18) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:460/proxy/: tls baz (200; 4.582223ms) Feb 4 14:12:46.677: INFO: (18) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:462/proxy/: tls qux (200; 4.500642ms) Feb 4 14:12:46.677: INFO: (18) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q/proxy/: test (200; 4.57583ms) Feb 4 14:12:46.677: INFO: (18) /api/v1/namespaces/proxy-2637/services/proxy-service-2sncj:portname1/proxy/: foo (200; 4.534012ms) Feb 4 14:12:46.677: INFO: (18) /api/v1/namespaces/proxy-2637/services/https:proxy-service-2sncj:tlsportname2/proxy/: tls qux (200; 4.615326ms) Feb 4 14:12:46.677: INFO: (18) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:160/proxy/: foo (200; 4.568076ms) Feb 4 14:12:46.678: INFO: (18) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:162/proxy/: bar (200; 4.58711ms) Feb 4 14:12:46.681: INFO: (19) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:1080/proxy/: ... 
(200; 2.988704ms) Feb 4 14:12:46.681: INFO: (19) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:160/proxy/: foo (200; 3.131644ms) Feb 4 14:12:46.681: INFO: (19) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:1080/proxy/: test<... (200; 3.136019ms) Feb 4 14:12:46.681: INFO: (19) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q:162/proxy/: bar (200; 3.212874ms) Feb 4 14:12:46.681: INFO: (19) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:162/proxy/: bar (200; 3.193171ms) Feb 4 14:12:46.681: INFO: (19) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:462/proxy/: tls qux (200; 3.236344ms) Feb 4 14:12:46.681: INFO: (19) /api/v1/namespaces/proxy-2637/pods/http:proxy-service-2sncj-h4s8q:160/proxy/: foo (200; 3.266707ms) Feb 4 14:12:46.681: INFO: (19) /api/v1/namespaces/proxy-2637/pods/proxy-service-2sncj-h4s8q/proxy/: test (200; 3.19672ms) Feb 4 14:12:46.681: INFO: (19) /api/v1/namespaces/proxy-2637/pods/https:proxy-service-2sncj-h4s8q:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 4 14:12:56.467: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:12:56.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2256" for this suite. 
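
[Editor's note] For readers unfamiliar with the feature this test exercises: after a container exits, the kubelet reads the file named by terminationMessagePath and copies it into the container's terminated status, which is what the "Expected: &{DONE} to match Container's Termination Message: DONE" assertion above checks. A minimal sketch of such a pod in Go using the k8s.io/api types (pod name, image, and UID here are illustrative, not the test's exact fixtures):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

// Pod whose container runs as a non-root UID and writes its termination
// message to a non-default path; the kubelet copies that file into the
// container's terminated-state Message.
var pod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:    "main",
			Image:   "busybox:1.29",
			Command: []string{"/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
			// Non-default path (the default is /dev/termination-log).
			TerminationMessagePath: "/dev/termination-custom-log",
			SecurityContext: &corev1.SecurityContext{
				RunAsUser: int64Ptr(1000), // non-root, matching the [LinuxOnly] variant
			},
		}},
	},
}

func main() { _ = pod }
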
• [SLOW TEST:5.297 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":311,"completed":237,"skipped":4089,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:12:56.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: set up a multi version CRD Feb 4 14:12:56.651: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:13:12.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1630" for this suite.
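
[Editor's note] A sketch of the kind of multi-version CRD this test sets up, with made-up group and kind names: only versions with Served=true are published in the aggregated OpenAPI spec, so flipping one version to Served=false removes its definition while the other version's definition is unchanged.

package main

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Two-version CRD: after the edit below, v1alpha1 is unserved and its
// schema disappears from the published OpenAPI document.
var crd = &apiextensionsv1.CustomResourceDefinition{
	ObjectMeta: metav1.ObjectMeta{Name: "widgets.demo.example.com"},
	Spec: apiextensionsv1.CustomResourceDefinitionSpec{
		Group: "demo.example.com",
		Names: apiextensionsv1.CustomResourceDefinitionNames{
			Plural: "widgets", Singular: "widget", Kind: "Widget", ListKind: "WidgetList",
		},
		Scope: apiextensionsv1.NamespaceScoped,
		Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
			{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
				},
			},
			{
				Name: "v1alpha1", Served: false, Storage: false, // unserved: dropped from the spec
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
				},
			},
		},
	},
}

func main() { _ = crd }
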
• [SLOW TEST:16.104 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":311,"completed":238,"skipped":4102,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:13:12.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Feb 4 14:13:12.781: INFO: Waiting up to 5m0s for pod "downwardapi-volume-432f3cff-77d2-42f3-b864-29e062a9cdec" in namespace "downward-api-5205" to be "Succeeded or Failed" Feb 4 14:13:12.784: INFO: Pod "downwardapi-volume-432f3cff-77d2-42f3-b864-29e062a9cdec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.66375ms Feb 4 14:13:14.790: INFO: Pod "downwardapi-volume-432f3cff-77d2-42f3-b864-29e062a9cdec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008792818s Feb 4 14:13:16.801: INFO: Pod "downwardapi-volume-432f3cff-77d2-42f3-b864-29e062a9cdec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019458019s STEP: Saw pod success Feb 4 14:13:16.801: INFO: Pod "downwardapi-volume-432f3cff-77d2-42f3-b864-29e062a9cdec" satisfied condition "Succeeded or Failed" Feb 4 14:13:16.803: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-432f3cff-77d2-42f3-b864-29e062a9cdec container client-container: STEP: delete the pod Feb 4 14:13:16.885: INFO: Waiting for pod downwardapi-volume-432f3cff-77d2-42f3-b864-29e062a9cdec to disappear Feb 4 14:13:16.892: INFO: Pod downwardapi-volume-432f3cff-77d2-42f3-b864-29e062a9cdec no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:13:16.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5205" for this suite. 
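
[Editor's note] The "set mode on item file" test above creates a pod like the following sketch: a downward API volume with an explicit per-item Mode, after which the pod stats the projected file and verifies it was created with 0400. Image and command here are illustrative stand-ins for the test's own fixtures.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

// Downward API volume exposing the pod's name as a file with mode 0400.
var pod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-mode-demo"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:         "client-container",
			Image:        "busybox:1.29",
			Command:      []string{"/bin/sh", "-c", "stat -c %a /etc/podinfo/podname"},
			VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
		}},
		Volumes: []corev1.Volume{{
			Name: "podinfo",
			VolumeSource: corev1.VolumeSource{
				DownwardAPI: &corev1.DownwardAPIVolumeSource{
					Items: []corev1.DownwardAPIVolumeFile{{
						Path:     "podname",
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						Mode:     int32Ptr(0400), // the per-item mode under test
					}},
				},
			},
		}},
	},
}

func main() { _ = pod }
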
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":239,"skipped":4122,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:13:16.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 14:13:17.572: INFO: Checking APIGroup: apiregistration.k8s.io Feb 4 14:13:17.573: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Feb 4 14:13:17.573: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Feb 4 14:13:17.573: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Feb 4 14:13:17.573: INFO: Checking APIGroup: apps Feb 4 14:13:17.575: INFO: PreferredVersion.GroupVersion: apps/v1 Feb 4 14:13:17.575: INFO: Versions found [{apps/v1 v1}] Feb 4 14:13:17.575: INFO: apps/v1 matches apps/v1 Feb 4 14:13:17.575: INFO: Checking APIGroup: events.k8s.io Feb 4 14:13:17.576: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Feb 4 14:13:17.576: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Feb 4 14:13:17.576: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Feb 4 14:13:17.576: INFO: Checking APIGroup: authentication.k8s.io Feb 4 14:13:17.577: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Feb 4 14:13:17.577: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] Feb 4 14:13:17.577: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Feb 4 14:13:17.577: INFO: Checking APIGroup: authorization.k8s.io Feb 4 14:13:17.578: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Feb 4 14:13:17.578: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] Feb 4 14:13:17.578: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Feb 4 14:13:17.578: INFO: Checking APIGroup: autoscaling Feb 4 14:13:17.579: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Feb 4 14:13:17.579: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Feb 4 14:13:17.579: INFO: autoscaling/v1 matches autoscaling/v1 Feb 4 14:13:17.579: INFO: Checking APIGroup: batch Feb 4 14:13:17.580: INFO: PreferredVersion.GroupVersion: batch/v1 Feb 4 14:13:17.580: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Feb 4 14:13:17.580: INFO: batch/v1 matches batch/v1 Feb 4 14:13:17.580: INFO: Checking APIGroup: certificates.k8s.io Feb 4 14:13:17.581: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Feb 4 14:13:17.581: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 
v1beta1}] Feb 4 14:13:17.581: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Feb 4 14:13:17.581: INFO: Checking APIGroup: networking.k8s.io Feb 4 14:13:17.582: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Feb 4 14:13:17.582: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Feb 4 14:13:17.582: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Feb 4 14:13:17.582: INFO: Checking APIGroup: extensions Feb 4 14:13:17.583: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Feb 4 14:13:17.583: INFO: Versions found [{extensions/v1beta1 v1beta1}] Feb 4 14:13:17.583: INFO: extensions/v1beta1 matches extensions/v1beta1 Feb 4 14:13:17.583: INFO: Checking APIGroup: policy Feb 4 14:13:17.583: INFO: PreferredVersion.GroupVersion: policy/v1beta1 Feb 4 14:13:17.583: INFO: Versions found [{policy/v1beta1 v1beta1}] Feb 4 14:13:17.583: INFO: policy/v1beta1 matches policy/v1beta1 Feb 4 14:13:17.583: INFO: Checking APIGroup: rbac.authorization.k8s.io Feb 4 14:13:17.584: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Feb 4 14:13:17.584: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Feb 4 14:13:17.584: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Feb 4 14:13:17.584: INFO: Checking APIGroup: storage.k8s.io Feb 4 14:13:17.585: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Feb 4 14:13:17.585: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Feb 4 14:13:17.585: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Feb 4 14:13:17.585: INFO: Checking APIGroup: admissionregistration.k8s.io Feb 4 14:13:17.586: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Feb 4 14:13:17.586: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Feb 4 14:13:17.586: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Feb 4 14:13:17.586: INFO: Checking APIGroup: apiextensions.k8s.io Feb 4 14:13:17.587: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Feb 4 14:13:17.587: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] Feb 4 14:13:17.587: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Feb 4 14:13:17.587: INFO: Checking APIGroup: scheduling.k8s.io Feb 4 14:13:17.587: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Feb 4 14:13:17.587: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Feb 4 14:13:17.587: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Feb 4 14:13:17.587: INFO: Checking APIGroup: coordination.k8s.io Feb 4 14:13:17.588: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Feb 4 14:13:17.588: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Feb 4 14:13:17.588: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Feb 4 14:13:17.588: INFO: Checking APIGroup: node.k8s.io Feb 4 14:13:17.589: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 Feb 4 14:13:17.589: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] Feb 4 14:13:17.589: INFO: node.k8s.io/v1 matches node.k8s.io/v1 Feb 4 14:13:17.589: INFO: Checking APIGroup: discovery.k8s.io Feb 4 14:13:17.589: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1 Feb 4 14:13:17.589: INFO: Versions found [{discovery.k8s.io/v1beta1 v1beta1}] Feb 4 14:13:17.589: INFO: 
discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1 Feb 4 14:13:17.589: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io Feb 4 14:13:17.590: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1 Feb 4 14:13:17.590: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] Feb 4 14:13:17.590: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1 Feb 4 14:13:17.590: INFO: Checking APIGroup: pingcap.com Feb 4 14:13:17.590: INFO: PreferredVersion.GroupVersion: pingcap.com/v1alpha1 Feb 4 14:13:17.590: INFO: Versions found [{pingcap.com/v1alpha1 v1alpha1}] Feb 4 14:13:17.590: INFO: pingcap.com/v1alpha1 matches pingcap.com/v1alpha1 [AfterEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:13:17.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-7886" for this suite. •{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":311,"completed":240,"skipped":4128,"failed":0} S ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:13:17.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:13:33.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-67" for this suite. 
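
[Editor's note] "Locally restarted" in the Job test above means restartPolicy OnFailure: the kubelet restarts the failing container inside the same pod instead of the Job controller replacing the pod. A sketch under that assumption, using an emptyDir marker file so each pod fails once and then succeeds on the in-place restart (the marker survives a container restart because the volume belongs to the pod); names and image are illustrative.

package main

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

// Job whose pods fail on the first attempt and succeed after a local restart.
var job = &batchv1.Job{
	ObjectMeta: metav1.ObjectMeta{Name: "fail-once-local"},
	Spec: batchv1.JobSpec{
		Parallelism: int32Ptr(2),
		Completions: int32Ptr(2),
		Template: corev1.PodTemplateSpec{
			Spec: corev1.PodSpec{
				RestartPolicy: corev1.RestartPolicyOnFailure, // restart in place, same pod
				Containers: []corev1.Container{{
					Name:  "c",
					Image: "busybox:1.29",
					Command: []string{"/bin/sh", "-c",
						"if [ -f /data/done ]; then exit 0; else touch /data/done; exit 1; fi"},
					VolumeMounts: []corev1.VolumeMount{{Name: "data", MountPath: "/data"}},
				}},
				Volumes: []corev1.Volume{{
					Name:         "data",
					VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
				}},
			},
		},
	},
}

func main() { _ = job }
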
• [SLOW TEST:16.108 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":311,"completed":241,"skipped":4129,"failed":0} SSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:13:33.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:740 [It] should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating service multi-endpoint-test in namespace services-9829 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9829 to expose endpoints map[] Feb 4 14:13:34.237: INFO: Failed to get Endpoints object: endpoints "multi-endpoint-test" not found Feb 4 14:13:35.244: INFO: successfully validated that service multi-endpoint-test in namespace services-9829 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-9829 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9829 to expose endpoints map[pod1:[100]] Feb 4 14:13:39.357: INFO: successfully validated that service multi-endpoint-test in namespace services-9829 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-9829 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9829 to expose endpoints map[pod1:[100] pod2:[101]] Feb 4 14:13:43.451: INFO: successfully validated that service multi-endpoint-test in namespace services-9829 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-9829 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9829 to expose endpoints map[pod2:[101]] Feb 4 14:13:43.873: INFO: successfully validated that service multi-endpoint-test in namespace services-9829 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-9829 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9829 to expose endpoints map[] Feb 4 14:13:44.189: INFO: successfully validated that service multi-endpoint-test in namespace services-9829 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:13:44.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9829" for this suite.
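
[Editor's note] A sketch of the service shape behind the endpoints maps logged above: two named service ports targeting different container ports, which is why the endpoints read pod1:[100] and pod2:[101] (endpoints record the target ports, not the service ports). The selector label is an illustrative assumption.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// Multiport service: each named port forwards to its own container port.
var svc = &corev1.Service{
	ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
	Spec: corev1.ServiceSpec{
		Selector: map[string]string{"app": "multi-endpoint-test"},
		Ports: []corev1.ServicePort{
			{Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
			{Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
		},
	},
}

func main() { _ = svc }
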
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:744 • [SLOW TEST:10.858 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":311,"completed":242,"skipped":4132,"failed":0} SSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:13:44.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:740 [It] should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: fetching services [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:13:44.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2620" for this suite. 
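
[Editor's note] The "fetching services" step above is a cluster-wide list. A minimal client-go sketch of that operation: passing metav1.NamespaceAll (the empty string) to Services() scopes the list to every namespace. The real test additionally filters by a label it put on its own service, which ListOptions.LabelSelector would express; the kubeconfig path matches the one in the log.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List services in all namespaces; add a LabelSelector to narrow it.
	svcs, err := cs.CoreV1().Services(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range svcs.Items {
		fmt.Printf("%s/%s\n", s.Namespace, s.Name)
	}
}
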
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:744 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":311,"completed":243,"skipped":4138,"failed":0} SSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:13:44.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 14:13:45.167: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-eb3f82c3-9d7c-4cd2-882a-5c2d5f354b05" in namespace "security-context-test-7711" to be "Succeeded or Failed" Feb 4 14:13:45.171: INFO: Pod "busybox-readonly-false-eb3f82c3-9d7c-4cd2-882a-5c2d5f354b05": Phase="Pending", Reason="", readiness=false. Elapsed: 3.750538ms Feb 4 14:13:47.185: INFO: Pod "busybox-readonly-false-eb3f82c3-9d7c-4cd2-882a-5c2d5f354b05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017171185s Feb 4 14:13:49.188: INFO: Pod "busybox-readonly-false-eb3f82c3-9d7c-4cd2-882a-5c2d5f354b05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020914354s Feb 4 14:13:49.188: INFO: Pod "busybox-readonly-false-eb3f82c3-9d7c-4cd2-882a-5c2d5f354b05" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:13:49.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7711" for this suite. 
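
[Editor's note] A sketch of the readOnlyRootFilesystem=false pod above: with the flag explicitly false, writes to the container's root filesystem succeed and the pod runs to completion. Image and command are illustrative.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

// Writable rootfs: the touch succeeds; set the flag to true and it fails.
var pod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-false-demo"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:    "main",
			Image:   "busybox:1.29",
			Command: []string{"/bin/sh", "-c", "touch /tmp/writable && echo ok"},
			SecurityContext: &corev1.SecurityContext{
				ReadOnlyRootFilesystem: boolPtr(false),
			},
		}},
	},
}

func main() { _ = pod }
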
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":311,"completed":244,"skipped":4142,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:13:49.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Feb 4 14:13:49.329: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c0ac8a48-59a9-434a-8232-b30c84946c7f" in namespace "downward-api-40" to be "Succeeded or Failed" Feb 4 14:13:49.346: INFO: Pod "downwardapi-volume-c0ac8a48-59a9-434a-8232-b30c84946c7f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.996619ms Feb 4 14:13:51.473: INFO: Pod "downwardapi-volume-c0ac8a48-59a9-434a-8232-b30c84946c7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144063548s Feb 4 14:13:53.478: INFO: Pod "downwardapi-volume-c0ac8a48-59a9-434a-8232-b30c84946c7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.148945977s STEP: Saw pod success Feb 4 14:13:53.478: INFO: Pod "downwardapi-volume-c0ac8a48-59a9-434a-8232-b30c84946c7f" satisfied condition "Succeeded or Failed" Feb 4 14:13:53.481: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-c0ac8a48-59a9-434a-8232-b30c84946c7f container client-container: STEP: delete the pod Feb 4 14:13:53.553: INFO: Waiting for pod downwardapi-volume-c0ac8a48-59a9-434a-8232-b30c84946c7f to disappear Feb 4 14:13:53.572: INFO: Pod downwardapi-volume-c0ac8a48-59a9-434a-8232-b30c84946c7f no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:13:53.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-40" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":311,"completed":245,"skipped":4144,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:13:53.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 14:13:53.672: INFO: Waiting up to 5m0s for pod "busybox-user-65534-eea6e4c1-8b0d-47b7-8387-e18d38730e1f" in namespace "security-context-test-8161" to be "Succeeded or Failed" Feb 4 14:13:53.680: INFO: Pod "busybox-user-65534-eea6e4c1-8b0d-47b7-8387-e18d38730e1f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.600912ms Feb 4 14:13:55.685: INFO: Pod "busybox-user-65534-eea6e4c1-8b0d-47b7-8387-e18d38730e1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012680448s Feb 4 14:13:57.689: INFO: Pod "busybox-user-65534-eea6e4c1-8b0d-47b7-8387-e18d38730e1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016849912s Feb 4 14:13:57.689: INFO: Pod "busybox-user-65534-eea6e4c1-8b0d-47b7-8387-e18d38730e1f" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:13:57.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8161" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":246,"skipped":4179,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:13:57.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1520 [It] should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: running the image docker.io/library/httpd:2.4.38-alpine Feb 4 14:13:57.942: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-8388 run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine' Feb 4 14:14:01.355: INFO: stderr: "" Feb 4 14:14:01.355: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1524 Feb 4 14:14:01.378: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-8388 delete pods e2e-test-httpd-pod' Feb 4 14:14:11.083: INFO: stderr: "" Feb 4 14:14:11.083: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:14:11.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8388" for this suite. 
• [SLOW TEST:13.393 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1517 should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":311,"completed":247,"skipped":4186,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:14:11.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name projected-configmap-test-volume-ebf43f76-0260-427c-b6e7-ca3f45944973 STEP: Creating a pod to test consume configMaps Feb 4 14:14:11.199: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c547976f-9ba3-4d72-91d6-0cde1fa23626" in namespace "projected-6177" to be "Succeeded or Failed" Feb 4 14:14:11.219: INFO: Pod "pod-projected-configmaps-c547976f-9ba3-4d72-91d6-0cde1fa23626": Phase="Pending", Reason="", readiness=false. Elapsed: 20.831399ms Feb 4 14:14:13.243: INFO: Pod "pod-projected-configmaps-c547976f-9ba3-4d72-91d6-0cde1fa23626": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044617733s Feb 4 14:14:15.263: INFO: Pod "pod-projected-configmaps-c547976f-9ba3-4d72-91d6-0cde1fa23626": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064363235s STEP: Saw pod success Feb 4 14:14:15.263: INFO: Pod "pod-projected-configmaps-c547976f-9ba3-4d72-91d6-0cde1fa23626" satisfied condition "Succeeded or Failed" Feb 4 14:14:15.266: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-c547976f-9ba3-4d72-91d6-0cde1fa23626 container projected-configmap-volume-test: STEP: delete the pod Feb 4 14:14:15.300: INFO: Waiting for pod pod-projected-configmaps-c547976f-9ba3-4d72-91d6-0cde1fa23626 to disappear Feb 4 14:14:15.309: INFO: Pod pod-projected-configmaps-c547976f-9ba3-4d72-91d6-0cde1fa23626 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:14:15.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6177" for this suite. 
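
[Editor's note] A sketch of the "consumable in multiple volumes" shape above: the same configmap projected into two separate volumes mounted at different paths in one container. The configmap name and its "data" key are illustrative assumptions.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// One configmap, projected twice into the same pod.
var pod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "projected-cm-twice-demo"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:    "projected-configmap-volume-test",
			Image:   "busybox:1.29",
			Command: []string{"/bin/sh", "-c", "cat /etc/cm-volume-1/data /etc/cm-volume-2/data"},
			VolumeMounts: []corev1.VolumeMount{
				{Name: "cm-volume-1", MountPath: "/etc/cm-volume-1", ReadOnly: true},
				{Name: "cm-volume-2", MountPath: "/etc/cm-volume-2", ReadOnly: true},
			},
		}},
		Volumes: []corev1.Volume{
			projectedCM("cm-volume-1", "demo-configmap"),
			projectedCM("cm-volume-2", "demo-configmap"),
		},
	},
}

// projectedCM builds a projected volume wrapping a single configmap source.
func projectedCM(volName, cmName string) corev1.Volume {
	return corev1.Volume{
		Name: volName,
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
					},
				}},
			},
		},
	}
}

func main() { _ = pod }
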
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":311,"completed":248,"skipped":4204,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:14:15.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:14:19.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-82" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":249,"skipped":4231,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:14:19.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Feb 4 14:14:20.219: INFO: Pod name wrapped-volume-race-1fc4ff93-b626-434b-82d9-2afa0c8b8201: Found 0 pods out of 5 Feb 4 14:14:25.229: INFO: Pod name wrapped-volume-race-1fc4ff93-b626-434b-82d9-2afa0c8b8201: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-1fc4ff93-b626-434b-82d9-2afa0c8b8201 in namespace emptydir-wrapper-9371, will wait for the garbage collector to delete the pods Feb 4 14:14:39.321: INFO: Deleting ReplicationController wrapped-volume-race-1fc4ff93-b626-434b-82d9-2afa0c8b8201 took: 11.65283ms Feb 4 14:14:39.921: INFO: Terminating ReplicationController wrapped-volume-race-1fc4ff93-b626-434b-82d9-2afa0c8b8201 pods took: 600.279624ms STEP: Creating RC which spawns configmap-volume pods Feb 4 14:15:51.296: INFO: Pod name wrapped-volume-race-cf245980-6a6e-4f97-b3b4-17ccb2663a3d: 
Found 0 pods out of 5 Feb 4 14:15:56.305: INFO: Pod name wrapped-volume-race-cf245980-6a6e-4f97-b3b4-17ccb2663a3d: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-cf245980-6a6e-4f97-b3b4-17ccb2663a3d in namespace emptydir-wrapper-9371, will wait for the garbage collector to delete the pods Feb 4 14:16:12.450: INFO: Deleting ReplicationController wrapped-volume-race-cf245980-6a6e-4f97-b3b4-17ccb2663a3d took: 69.053383ms Feb 4 14:16:13.051: INFO: Terminating ReplicationController wrapped-volume-race-cf245980-6a6e-4f97-b3b4-17ccb2663a3d pods took: 600.198398ms STEP: Creating RC which spawns configmap-volume pods Feb 4 14:16:51.445: INFO: Pod name wrapped-volume-race-ed254b86-e5b7-4a3d-a8f2-64b1ece67581: Found 0 pods out of 5 Feb 4 14:16:56.455: INFO: Pod name wrapped-volume-race-ed254b86-e5b7-4a3d-a8f2-64b1ece67581: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-ed254b86-e5b7-4a3d-a8f2-64b1ece67581 in namespace emptydir-wrapper-9371, will wait for the garbage collector to delete the pods Feb 4 14:17:12.544: INFO: Deleting ReplicationController wrapped-volume-race-ed254b86-e5b7-4a3d-a8f2-64b1ece67581 took: 15.483522ms Feb 4 14:17:13.145: INFO: Terminating ReplicationController wrapped-volume-race-ed254b86-e5b7-4a3d-a8f2-64b1ece67581 pods took: 600.243743ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:17:41.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9371" for this suite. • [SLOW TEST:202.370 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":311,"completed":250,"skipped":4257,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:17:41.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Feb 4 14:17:41.964: INFO: Waiting up to 5m0s 
for pod "downwardapi-volume-8f331bba-3620-4e89-b8a6-eb565e98108f" in namespace "projected-3736" to be "Succeeded or Failed" Feb 4 14:17:41.967: INFO: Pod "downwardapi-volume-8f331bba-3620-4e89-b8a6-eb565e98108f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.693632ms Feb 4 14:17:43.971: INFO: Pod "downwardapi-volume-8f331bba-3620-4e89-b8a6-eb565e98108f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007665029s Feb 4 14:17:45.977: INFO: Pod "downwardapi-volume-8f331bba-3620-4e89-b8a6-eb565e98108f": Phase="Running", Reason="", readiness=true. Elapsed: 4.013444578s Feb 4 14:17:47.993: INFO: Pod "downwardapi-volume-8f331bba-3620-4e89-b8a6-eb565e98108f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029103758s STEP: Saw pod success Feb 4 14:17:47.993: INFO: Pod "downwardapi-volume-8f331bba-3620-4e89-b8a6-eb565e98108f" satisfied condition "Succeeded or Failed" Feb 4 14:17:48.026: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-8f331bba-3620-4e89-b8a6-eb565e98108f container client-container: STEP: delete the pod Feb 4 14:17:48.106: INFO: Waiting for pod downwardapi-volume-8f331bba-3620-4e89-b8a6-eb565e98108f to disappear Feb 4 14:17:48.111: INFO: Pod downwardapi-volume-8f331bba-3620-4e89-b8a6-eb565e98108f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:17:48.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3736" for this suite. • [SLOW TEST:6.369 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":311,"completed":251,"skipped":4258,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:17:48.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating secret with name s-test-opt-del-72299d38-18db-4d52-9c41-668101ad83a2 STEP: Creating secret with name s-test-opt-upd-fa9c8e7a-ef69-46e1-9664-1227325da3e1 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-72299d38-18db-4d52-9c41-668101ad83a2 STEP: Updating secret s-test-opt-upd-fa9c8e7a-ef69-46e1-9664-1227325da3e1 STEP: Creating secret with name s-test-opt-create-fd84e3f7-7b47-4810-8f67-97a8416164ee STEP: waiting to observe 
update in volume [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:19:04.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7212" for this suite. • [SLOW TEST:76.666 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":311,"completed":252,"skipped":4273,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:19:04.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Feb 4 14:19:08.953: INFO: &Pod{ObjectMeta:{send-events-097b2b27-4944-4db7-b15f-16a497347730 events-5578 7fb8ef00-cda5-414f-8c04-a8f81b650a3e 2109931 0 2021-02-04 14:19:04 +0000 UTC map[name:foo time:922875812] map[] [] [] [{e2e.test Update v1 2021-02-04 14:19:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 14:19:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.134\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dchgn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dchgn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.26,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dchgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 14:19:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 14:19:08 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 14:19:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 14:19:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.2.134,StartTime:2021-02-04 14:19:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-04 14:19:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.26,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e,ContainerID:containerd://c467bf509f5cae05eb03f6aaef88343868ae251fa2e9d8335912f8cad5d7e01f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.134,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Feb 4 14:19:10.957: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Feb 4 14:19:12.964: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:19:12.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5578" for this suite. • [SLOW TEST:8.163 seconds] [k8s.io] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":311,"completed":253,"skipped":4296,"failed":0} SSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:19:13.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:19:13.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2953" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":311,"completed":254,"skipped":4304,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:19:13.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 4 14:19:13.800: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 4 14:19:15.857: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045153, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045153, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045153, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045153, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 4 14:19:18.903: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:19:19.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"webhook-1807" for this suite. STEP: Destroying namespace "webhook-1807-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:6.465 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":311,"completed":255,"skipped":4317,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:19:19.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 14:19:19.772: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:19:23.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3670" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":311,"completed":256,"skipped":4336,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:19:23.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-9855 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a new StatefulSet Feb 4 14:19:24.073: INFO: Found 0 stateful pods, waiting for 3 Feb 4 14:19:34.078: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 4 14:19:34.078: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 4 14:19:34.078: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 4 14:19:44.079: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 4 14:19:44.079: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 4 14:19:44.079: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Feb 4 14:19:44.088: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-9855 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 4 14:19:44.419: INFO: stderr: "I0204 14:19:44.232378 3640 log.go:181] (0xc00003a0b0) (0xc000b461e0) Create stream\nI0204 14:19:44.232445 3640 log.go:181] (0xc00003a0b0) (0xc000b461e0) Stream added, broadcasting: 1\nI0204 14:19:44.235875 3640 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI0204 14:19:44.235952 3640 log.go:181] (0xc00003a0b0) (0xc000b46280) Create stream\nI0204 14:19:44.235979 3640 log.go:181] (0xc00003a0b0) (0xc000b46280) Stream added, broadcasting: 3\nI0204 14:19:44.238015 3640 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI0204 14:19:44.238062 3640 log.go:181] (0xc00003a0b0) (0xc00091a280) Create stream\nI0204 14:19:44.238075 3640 log.go:181] (0xc00003a0b0) (0xc00091a280) Stream added, broadcasting: 5\nI0204 14:19:44.239332 3640 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI0204 14:19:44.304581 3640 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0204 14:19:44.304608 3640 log.go:181] (0xc00091a280) (5) Data frame handling\nI0204 14:19:44.304624 3640 log.go:181] (0xc00091a280) (5) Data frame sent\n+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\nI0204 14:19:44.410602 3640 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0204 14:19:44.410646 3640 log.go:181] (0xc00091a280) (5) Data frame handling\nI0204 14:19:44.410717 3640 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 14:19:44.410734 3640 log.go:181] (0xc000b46280) (3) Data frame handling\nI0204 14:19:44.410747 3640 log.go:181] (0xc000b46280) (3) Data frame sent\nI0204 14:19:44.410757 3640 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0204 14:19:44.410764 3640 log.go:181] (0xc000b46280) (3) Data frame handling\nI0204 14:19:44.412703 3640 log.go:181] (0xc00003a0b0) Data frame received for 1\nI0204 14:19:44.412743 3640 log.go:181] (0xc000b461e0) (1) Data frame handling\nI0204 14:19:44.412766 3640 log.go:181] (0xc000b461e0) (1) Data frame sent\nI0204 14:19:44.412793 3640 log.go:181] (0xc00003a0b0) (0xc000b461e0) Stream removed, broadcasting: 1\nI0204 14:19:44.412998 3640 log.go:181] (0xc00003a0b0) Go away received\nI0204 14:19:44.413454 3640 log.go:181] (0xc00003a0b0) (0xc000b461e0) Stream removed, broadcasting: 1\nI0204 14:19:44.413481 3640 log.go:181] (0xc00003a0b0) (0xc000b46280) Stream removed, broadcasting: 3\nI0204 14:19:44.413494 3640 log.go:181] (0xc00003a0b0) (0xc00091a280) Stream removed, broadcasting: 5\n" Feb 4 14:19:44.420: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 4 14:19:44.420: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Feb 4 14:19:54.496: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Feb 4 14:20:04.573: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-9855 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 4 14:20:04.829: INFO: stderr: "I0204 14:20:04.727075 3658 log.go:181] (0xc000141130) (0xc000a1a3c0) Create stream\nI0204 14:20:04.727164 3658 log.go:181] (0xc000141130) (0xc000a1a3c0) Stream added, broadcasting: 1\nI0204 14:20:04.728945 3658 log.go:181] (0xc000141130) Reply frame received for 1\nI0204 14:20:04.728982 3658 log.go:181] (0xc000141130) (0xc000a1a460) Create stream\nI0204 14:20:04.728991 3658 log.go:181] (0xc000141130) (0xc000a1a460) Stream added, broadcasting: 3\nI0204 14:20:04.729837 3658 log.go:181] (0xc000141130) Reply frame received for 3\nI0204 14:20:04.729884 3658 log.go:181] (0xc000141130) (0xc000624000) Create stream\nI0204 14:20:04.729897 3658 log.go:181] (0xc000141130) (0xc000624000) Stream added, broadcasting: 5\nI0204 14:20:04.730875 3658 log.go:181] (0xc000141130) Reply frame received for 5\nI0204 14:20:04.820413 3658 log.go:181] (0xc000141130) Data frame received for 3\nI0204 14:20:04.820456 3658 log.go:181] (0xc000a1a460) (3) Data frame handling\nI0204 14:20:04.820489 3658 log.go:181] (0xc000a1a460) (3) Data frame sent\nI0204 14:20:04.820506 3658 log.go:181] (0xc000141130) Data frame received for 3\nI0204 14:20:04.820525 3658 log.go:181] (0xc000a1a460) (3) Data frame handling\nI0204 14:20:04.820545 3658 log.go:181] (0xc000141130) Data frame received for 5\nI0204 14:20:04.820567 3658 log.go:181] (0xc000624000) (5) Data frame handling\nI0204 14:20:04.820600 3658 log.go:181] (0xc000624000) (5) Data frame sent\nI0204 
14:20:04.820623 3658 log.go:181] (0xc000141130) Data frame received for 5\nI0204 14:20:04.820635 3658 log.go:181] (0xc000624000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0204 14:20:04.822358 3658 log.go:181] (0xc000141130) Data frame received for 1\nI0204 14:20:04.822389 3658 log.go:181] (0xc000a1a3c0) (1) Data frame handling\nI0204 14:20:04.822415 3658 log.go:181] (0xc000a1a3c0) (1) Data frame sent\nI0204 14:20:04.822487 3658 log.go:181] (0xc000141130) (0xc000a1a3c0) Stream removed, broadcasting: 1\nI0204 14:20:04.822518 3658 log.go:181] (0xc000141130) Go away received\nI0204 14:20:04.822964 3658 log.go:181] (0xc000141130) (0xc000a1a3c0) Stream removed, broadcasting: 1\nI0204 14:20:04.822991 3658 log.go:181] (0xc000141130) (0xc000a1a460) Stream removed, broadcasting: 3\nI0204 14:20:04.823004 3658 log.go:181] (0xc000141130) (0xc000624000) Stream removed, broadcasting: 5\n" Feb 4 14:20:04.829: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 4 14:20:04.829: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 4 14:20:14.851: INFO: Waiting for StatefulSet statefulset-9855/ss2 to complete update Feb 4 14:20:14.851: INFO: Waiting for Pod statefulset-9855/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 4 14:20:14.851: INFO: Waiting for Pod statefulset-9855/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 4 14:20:24.859: INFO: Waiting for StatefulSet statefulset-9855/ss2 to complete update Feb 4 14:20:24.859: INFO: Waiting for Pod statefulset-9855/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 4 14:20:24.859: INFO: Waiting for Pod statefulset-9855/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 4 14:20:34.859: INFO: Waiting for StatefulSet statefulset-9855/ss2 to complete update Feb 4 14:20:34.859: INFO: Waiting for Pod statefulset-9855/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 4 14:20:34.859: INFO: Waiting for Pod statefulset-9855/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 4 14:20:44.859: INFO: Waiting for StatefulSet statefulset-9855/ss2 to complete update Feb 4 14:20:44.859: INFO: Waiting for Pod statefulset-9855/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 4 14:20:54.858: INFO: Waiting for StatefulSet statefulset-9855/ss2 to complete update Feb 4 14:20:54.858: INFO: Waiting for Pod statefulset-9855/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 4 14:21:04.858: INFO: Waiting for StatefulSet statefulset-9855/ss2 to complete update Feb 4 14:21:04.858: INFO: Waiting for Pod statefulset-9855/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 4 14:21:14.858: INFO: Waiting for StatefulSet statefulset-9855/ss2 to complete update Feb 4 14:21:14.859: INFO: Waiting for Pod statefulset-9855/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 4 14:21:24.859: INFO: Waiting for StatefulSet statefulset-9855/ss2 to complete update Feb 4 14:21:24.859: INFO: Waiting for Pod statefulset-9855/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 4 14:21:34.859: INFO: Waiting for StatefulSet statefulset-9855/ss2 to complete update Feb 4 14:21:34.859: INFO: Waiting for Pod statefulset-9855/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 4 14:21:44.859: 
INFO: Waiting for StatefulSet statefulset-9855/ss2 to complete update Feb 4 14:21:44.859: INFO: Waiting for Pod statefulset-9855/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 4 14:21:54.858: INFO: Waiting for StatefulSet statefulset-9855/ss2 to complete update STEP: Rolling back to a previous revision Feb 4 14:22:04.857: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-9855 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 4 14:22:05.277: INFO: stderr: "I0204 14:22:05.122636 3676 log.go:181] (0xc00014e370) (0xc00019ac80) Create stream\nI0204 14:22:05.122705 3676 log.go:181] (0xc00014e370) (0xc00019ac80) Stream added, broadcasting: 1\nI0204 14:22:05.126201 3676 log.go:181] (0xc00014e370) Reply frame received for 1\nI0204 14:22:05.126256 3676 log.go:181] (0xc00014e370) (0xc00069c460) Create stream\nI0204 14:22:05.126271 3676 log.go:181] (0xc00014e370) (0xc00069c460) Stream added, broadcasting: 3\nI0204 14:22:05.127298 3676 log.go:181] (0xc00014e370) Reply frame received for 3\nI0204 14:22:05.127368 3676 log.go:181] (0xc00014e370) (0xc0000ca3c0) Create stream\nI0204 14:22:05.127381 3676 log.go:181] (0xc00014e370) (0xc0000ca3c0) Stream added, broadcasting: 5\nI0204 14:22:05.128222 3676 log.go:181] (0xc00014e370) Reply frame received for 5\nI0204 14:22:05.211626 3676 log.go:181] (0xc00014e370) Data frame received for 5\nI0204 14:22:05.211659 3676 log.go:181] (0xc0000ca3c0) (5) Data frame handling\nI0204 14:22:05.211682 3676 log.go:181] (0xc0000ca3c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0204 14:22:05.269350 3676 log.go:181] (0xc00014e370) Data frame received for 3\nI0204 14:22:05.269380 3676 log.go:181] (0xc00069c460) (3) Data frame handling\nI0204 14:22:05.269408 3676 log.go:181] (0xc00069c460) (3) Data frame sent\nI0204 14:22:05.269420 3676 log.go:181] (0xc00014e370) Data frame received for 3\nI0204 14:22:05.269428 3676 log.go:181] (0xc00069c460) (3) Data frame handling\nI0204 14:22:05.269451 3676 log.go:181] (0xc00014e370) Data frame received for 5\nI0204 14:22:05.269463 3676 log.go:181] (0xc0000ca3c0) (5) Data frame handling\nI0204 14:22:05.271512 3676 log.go:181] (0xc00014e370) Data frame received for 1\nI0204 14:22:05.271524 3676 log.go:181] (0xc00019ac80) (1) Data frame handling\nI0204 14:22:05.271537 3676 log.go:181] (0xc00019ac80) (1) Data frame sent\nI0204 14:22:05.271636 3676 log.go:181] (0xc00014e370) (0xc00019ac80) Stream removed, broadcasting: 1\nI0204 14:22:05.271658 3676 log.go:181] (0xc00014e370) Go away received\nI0204 14:22:05.271923 3676 log.go:181] (0xc00014e370) (0xc00019ac80) Stream removed, broadcasting: 1\nI0204 14:22:05.271936 3676 log.go:181] (0xc00014e370) (0xc00069c460) Stream removed, broadcasting: 3\nI0204 14:22:05.271941 3676 log.go:181] (0xc00014e370) (0xc0000ca3c0) Stream removed, broadcasting: 5\n" Feb 4 14:22:05.277: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 4 14:22:05.277: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 4 14:22:15.316: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Feb 4 14:22:25.397: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=statefulset-9855 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' Feb 4 14:22:25.636: INFO: stderr: "I0204 14:22:25.539175 3694 log.go:181] (0xc000016000) (0xc0007a21e0) Create stream\nI0204 14:22:25.539226 3694 log.go:181] (0xc000016000) (0xc0007a21e0) Stream added, broadcasting: 1\nI0204 14:22:25.541161 3694 log.go:181] (0xc000016000) Reply frame received for 1\nI0204 14:22:25.541204 3694 log.go:181] (0xc000016000) (0xc000eae000) Create stream\nI0204 14:22:25.541218 3694 log.go:181] (0xc000016000) (0xc000eae000) Stream added, broadcasting: 3\nI0204 14:22:25.541951 3694 log.go:181] (0xc000016000) Reply frame received for 3\nI0204 14:22:25.541984 3694 log.go:181] (0xc000016000) (0xc000530280) Create stream\nI0204 14:22:25.541996 3694 log.go:181] (0xc000016000) (0xc000530280) Stream added, broadcasting: 5\nI0204 14:22:25.542675 3694 log.go:181] (0xc000016000) Reply frame received for 5\nI0204 14:22:25.627768 3694 log.go:181] (0xc000016000) Data frame received for 3\nI0204 14:22:25.627808 3694 log.go:181] (0xc000eae000) (3) Data frame handling\nI0204 14:22:25.627822 3694 log.go:181] (0xc000eae000) (3) Data frame sent\nI0204 14:22:25.627833 3694 log.go:181] (0xc000016000) Data frame received for 3\nI0204 14:22:25.627846 3694 log.go:181] (0xc000eae000) (3) Data frame handling\nI0204 14:22:25.627906 3694 log.go:181] (0xc000016000) Data frame received for 5\nI0204 14:22:25.627931 3694 log.go:181] (0xc000530280) (5) Data frame handling\nI0204 14:22:25.627967 3694 log.go:181] (0xc000530280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0204 14:22:25.628104 3694 log.go:181] (0xc000016000) Data frame received for 5\nI0204 14:22:25.628141 3694 log.go:181] (0xc000530280) (5) Data frame handling\nI0204 14:22:25.630130 3694 log.go:181] (0xc000016000) Data frame received for 1\nI0204 14:22:25.630154 3694 log.go:181] (0xc0007a21e0) (1) Data frame handling\nI0204 14:22:25.630164 3694 log.go:181] (0xc0007a21e0) (1) Data frame sent\nI0204 14:22:25.630183 3694 log.go:181] (0xc000016000) (0xc0007a21e0) Stream removed, broadcasting: 1\nI0204 14:22:25.630215 3694 log.go:181] (0xc000016000) Go away received\nI0204 14:22:25.630707 3694 log.go:181] (0xc000016000) (0xc0007a21e0) Stream removed, broadcasting: 1\nI0204 14:22:25.630725 3694 log.go:181] (0xc000016000) (0xc000eae000) Stream removed, broadcasting: 3\nI0204 14:22:25.630734 3694 log.go:181] (0xc000016000) (0xc000530280) Stream removed, broadcasting: 5\n" Feb 4 14:22:25.636: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 4 14:22:25.636: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 4 14:22:35.657: INFO: Waiting for StatefulSet statefulset-9855/ss2 to complete update Feb 4 14:22:35.657: INFO: Waiting for Pod statefulset-9855/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 4 14:22:35.657: INFO: Waiting for Pod statefulset-9855/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 4 14:22:35.657: INFO: Waiting for Pod statefulset-9855/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 4 14:22:45.665: INFO: Waiting for StatefulSet statefulset-9855/ss2 to complete update Feb 4 14:22:45.665: INFO: Waiting for Pod statefulset-9855/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 4 14:22:45.665: INFO: Waiting for Pod statefulset-9855/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 4 14:22:45.665: INFO: Waiting for Pod 
statefulset-9855/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 4 14:22:55.915: INFO: Waiting for StatefulSet statefulset-9855/ss2 to complete update Feb 4 14:22:55.915: INFO: Waiting for Pod statefulset-9855/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 4 14:22:55.915: INFO: Waiting for Pod statefulset-9855/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 4 14:23:05.746: INFO: Waiting for StatefulSet statefulset-9855/ss2 to complete update Feb 4 14:23:05.746: INFO: Waiting for Pod statefulset-9855/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Feb 4 14:23:15.664: INFO: Deleting all statefulset in ns statefulset-9855 Feb 4 14:23:15.667: INFO: Scaling statefulset ss2 to 0 Feb 4 14:25:55.690: INFO: Waiting for statefulset status.replicas updated to 0 Feb 4 14:25:55.692: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:25:55.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9855" for this suite. • [SLOW TEST:391.805 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":311,"completed":257,"skipped":4337,"failed":0} [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:25:55.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name configmap-projected-all-test-volume-7a278a57-e830-4fe4-a283-c1ca53326d50 STEP: Creating secret with name secret-projected-all-test-volume-77a81987-bac1-4906-8704-ec972d339743 STEP: Creating a pod to test Check all projections for projected volume plugin Feb 4 14:25:55.953: INFO: Waiting up to 5m0s for pod "projected-volume-8128d68a-4ebb-4bd4-9fd8-8e5a511d5422" in namespace "projected-7600" to be "Succeeded or Failed" Feb 4 14:25:56.014: INFO: Pod 
"projected-volume-8128d68a-4ebb-4bd4-9fd8-8e5a511d5422": Phase="Pending", Reason="", readiness=false. Elapsed: 60.638586ms Feb 4 14:25:58.018: INFO: Pod "projected-volume-8128d68a-4ebb-4bd4-9fd8-8e5a511d5422": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065187949s Feb 4 14:26:02.370: INFO: Pod "projected-volume-8128d68a-4ebb-4bd4-9fd8-8e5a511d5422": Phase="Pending", Reason="", readiness=false. Elapsed: 6.416683181s Feb 4 14:26:04.536: INFO: Pod "projected-volume-8128d68a-4ebb-4bd4-9fd8-8e5a511d5422": Phase="Pending", Reason="", readiness=false. Elapsed: 8.582971395s Feb 4 14:26:06.541: INFO: Pod "projected-volume-8128d68a-4ebb-4bd4-9fd8-8e5a511d5422": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.587739733s STEP: Saw pod success Feb 4 14:26:06.541: INFO: Pod "projected-volume-8128d68a-4ebb-4bd4-9fd8-8e5a511d5422" satisfied condition "Succeeded or Failed" Feb 4 14:26:06.544: INFO: Trying to get logs from node latest-worker2 pod projected-volume-8128d68a-4ebb-4bd4-9fd8-8e5a511d5422 container projected-all-volume-test: STEP: delete the pod Feb 4 14:26:06.608: INFO: Waiting for pod projected-volume-8128d68a-4ebb-4bd4-9fd8-8e5a511d5422 to disappear Feb 4 14:26:06.636: INFO: Pod projected-volume-8128d68a-4ebb-4bd4-9fd8-8e5a511d5422 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:26:06.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7600" for this suite. • [SLOW TEST:10.923 seconds] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":311,"completed":258,"skipped":4337,"failed":0} S ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:26:06.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating replication controller my-hostname-basic-1c653a6c-00ed-477f-9830-af266d3a027d Feb 4 14:26:06.845: INFO: Pod name my-hostname-basic-1c653a6c-00ed-477f-9830-af266d3a027d: Found 0 pods out of 1 Feb 4 14:26:11.862: INFO: Pod name my-hostname-basic-1c653a6c-00ed-477f-9830-af266d3a027d: Found 1 pods out of 1 Feb 4 
14:26:11.862: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-1c653a6c-00ed-477f-9830-af266d3a027d" are running Feb 4 14:26:11.868: INFO: Pod "my-hostname-basic-1c653a6c-00ed-477f-9830-af266d3a027d-j2nwf" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-04 14:26:06 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-04 14:26:09 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-04 14:26:09 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-04 14:26:06 +0000 UTC Reason: Message:}]) Feb 4 14:26:11.868: INFO: Trying to dial the pod Feb 4 14:26:16.881: INFO: Controller my-hostname-basic-1c653a6c-00ed-477f-9830-af266d3a027d: Got expected result from replica 1 [my-hostname-basic-1c653a6c-00ed-477f-9830-af266d3a027d-j2nwf]: "my-hostname-basic-1c653a6c-00ed-477f-9830-af266d3a027d-j2nwf", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:26:16.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7370" for this suite. • [SLOW TEST:10.232 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":311,"completed":259,"skipped":4338,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:26:16.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:27:17.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9558" for this suite. 
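[editor's note] The probe test above deliberately runs a full 60 seconds: the framework watches the pod stay NotReady the entire time while RestartCount stays 0. A minimal sketch of the kind of pod it creates, assuming the corev1 types of this release line; the names, image, and timings are illustrative, not taken from the test source.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// neverReadyPod builds a pod whose readiness probe always fails. A failing
// readiness probe only keeps the pod out of Service endpoints; unlike a
// liveness probe it never triggers a container restart, which is exactly
// what the test asserts.
func neverReadyPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "never-ready"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			Containers: []corev1.Container{{
				Name:    "probe-test",
				Image:   "busybox:1.29",
				Command: []string{"/bin/sh", "-c", "sleep 3600"},
				ReadinessProbe: &corev1.Probe{
					// corev1.Handler was renamed ProbeHandler in newer releases.
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}
}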
• [SLOW TEST:60.122 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":311,"completed":260,"skipped":4376,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:27:17.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 14:27:17.151: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Feb 4 14:27:20.690: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8417 --namespace=crd-publish-openapi-8417 create -f -' Feb 4 14:27:28.401: INFO: stderr: "" Feb 4 14:27:28.401: INFO: stdout: "e2e-test-crd-publish-openapi-1034-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Feb 4 14:27:28.401: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8417 --namespace=crd-publish-openapi-8417 delete e2e-test-crd-publish-openapi-1034-crds test-cr' Feb 4 14:27:28.512: INFO: stderr: "" Feb 4 14:27:28.512: INFO: stdout: "e2e-test-crd-publish-openapi-1034-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Feb 4 14:27:28.512: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8417 --namespace=crd-publish-openapi-8417 apply -f -' Feb 4 14:27:28.820: INFO: stderr: "" Feb 4 14:27:28.820: INFO: stdout: "e2e-test-crd-publish-openapi-1034-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Feb 4 14:27:28.820: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8417 --namespace=crd-publish-openapi-8417 delete e2e-test-crd-publish-openapi-1034-crds test-cr' Feb 4 14:27:28.931: INFO: stderr: "" Feb 4 14:27:28.931: INFO: stdout: "e2e-test-crd-publish-openapi-1034-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Feb 4 14:27:28.931: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8417 explain e2e-test-crd-publish-openapi-1034-crds' Feb 4 14:27:29.260: INFO: stderr: "" Feb 4 14:27:29.260: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1034-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:27:32.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8417" for this suite. • [SLOW TEST:15.797 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":311,"completed":261,"skipped":4413,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:27:32.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:27:36.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-659" for this suite. 
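[editor's note] The wrapper-volume check above boils down to one pod mounting a secret volume and a configMap volume side by side; both are materialized on the node via wrapped emptyDir volumes, and the test asserts the wrappers do not conflict. A rough sketch of such a pod spec, with all names and the image being illustrative assumptions:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// wrapperVolumesPod mounts a secret and a configMap in the same pod so both
// wrapper emptyDirs must coexist on the node.
func wrapperVolumesPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-and-configmaps"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{
				{Name: "secret-volume", VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "wrapper-test-secret"},
				}},
				{Name: "configmap-volume", VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "wrapper-test-configmap"},
					},
				}},
			},
			Containers: []corev1.Container{{
				Name:    "check",
				Image:   "busybox:1.29",
				Command: []string{"/bin/sh", "-c", "ls /etc/secret-volume /etc/configmap-volume && sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true},
					{Name: "configmap-volume", MountPath: "/etc/configmap-volume", ReadOnly: true},
				},
			}},
		},
	}
}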
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":311,"completed":262,"skipped":4417,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:27:37.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating projection with secret that has name projected-secret-test-map-0ffdab2d-500b-437b-84bc-a3cf5ce2e49a STEP: Creating a pod to test consume secrets Feb 4 14:27:37.560: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e0c8ee2d-250d-400b-9f8a-96fffda5d44f" in namespace "projected-8594" to be "Succeeded or Failed" Feb 4 14:27:37.650: INFO: Pod "pod-projected-secrets-e0c8ee2d-250d-400b-9f8a-96fffda5d44f": Phase="Pending", Reason="", readiness=false. Elapsed: 90.270127ms Feb 4 14:27:39.674: INFO: Pod "pod-projected-secrets-e0c8ee2d-250d-400b-9f8a-96fffda5d44f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114036126s Feb 4 14:27:41.678: INFO: Pod "pod-projected-secrets-e0c8ee2d-250d-400b-9f8a-96fffda5d44f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.118428968s STEP: Saw pod success Feb 4 14:27:41.678: INFO: Pod "pod-projected-secrets-e0c8ee2d-250d-400b-9f8a-96fffda5d44f" satisfied condition "Succeeded or Failed" Feb 4 14:27:41.683: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-e0c8ee2d-250d-400b-9f8a-96fffda5d44f container projected-secret-volume-test: STEP: delete the pod Feb 4 14:27:41.737: INFO: Waiting for pod pod-projected-secrets-e0c8ee2d-250d-400b-9f8a-96fffda5d44f to disappear Feb 4 14:27:41.842: INFO: Pod pod-projected-secrets-e0c8ee2d-250d-400b-9f8a-96fffda5d44f no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:27:41.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8594" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":263,"skipped":4425,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:27:41.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 4 14:27:42.657: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 4 14:27:44.670: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045662, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045662, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045662, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045662, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 4 14:27:47.713: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:27:47.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9512" for this suite. 
STEP: Destroying namespace "webhook-9512-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:6.334 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":311,"completed":264,"skipped":4454,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:27:48.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Feb 4 14:27:48.665: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Feb 4 14:27:50.674: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045668, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045668, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045668, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045668, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-5d6d98d788\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 14:27:52.688: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045668, loc:(*time.Location)(0x7886c60)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045668, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045668, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045668, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-5d6d98d788\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 4 14:27:55.740: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 14:27:55.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:27:56.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-379" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:8.811 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":311,"completed":265,"skipped":4471,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:27:56.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Feb 4 14:27:57.080: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9923a74c-86b9-4f5d-b83e-6ef7fca1b343" in namespace "downward-api-4248" to be "Succeeded 
or Failed" Feb 4 14:27:57.109: INFO: Pod "downwardapi-volume-9923a74c-86b9-4f5d-b83e-6ef7fca1b343": Phase="Pending", Reason="", readiness=false. Elapsed: 28.665835ms Feb 4 14:27:59.114: INFO: Pod "downwardapi-volume-9923a74c-86b9-4f5d-b83e-6ef7fca1b343": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033415311s Feb 4 14:28:01.118: INFO: Pod "downwardapi-volume-9923a74c-86b9-4f5d-b83e-6ef7fca1b343": Phase="Running", Reason="", readiness=true. Elapsed: 4.037593691s Feb 4 14:28:03.123: INFO: Pod "downwardapi-volume-9923a74c-86b9-4f5d-b83e-6ef7fca1b343": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.042513358s STEP: Saw pod success Feb 4 14:28:03.123: INFO: Pod "downwardapi-volume-9923a74c-86b9-4f5d-b83e-6ef7fca1b343" satisfied condition "Succeeded or Failed" Feb 4 14:28:03.126: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-9923a74c-86b9-4f5d-b83e-6ef7fca1b343 container client-container: STEP: delete the pod Feb 4 14:28:03.193: INFO: Waiting for pod downwardapi-volume-9923a74c-86b9-4f5d-b83e-6ef7fca1b343 to disappear Feb 4 14:28:03.210: INFO: Pod downwardapi-volume-9923a74c-86b9-4f5d-b83e-6ef7fca1b343 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:28:03.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4248" for this suite. • [SLOW TEST:6.222 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":311,"completed":266,"skipped":4475,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:28:03.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Feb 4 14:28:10.570: INFO: 10 pods remaining Feb 4 14:28:10.570: INFO: 10 pods has nil DeletionTimestamp Feb 4 14:28:10.570: INFO: Feb 4 14:28:11.655: INFO: 10 pods remaining Feb 4 14:28:11.655: INFO: 0 pods has nil DeletionTimestamp Feb 4 14:28:11.655: INFO: Feb 4 14:28:12.843: INFO: 0 pods remaining Feb 4 14:28:12.843: INFO: 0 pods has nil DeletionTimestamp Feb 4 14:28:12.844: INFO: Feb 4 14:28:13.622: INFO: 0 pods remaining Feb 4 14:28:13.622: INFO: 0 pods has nil DeletionTimestamp Feb 4 14:28:13.622: 
INFO: STEP: Gathering metrics W0204 14:28:14.947208 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Feb 4 14:29:17.517: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:29:17.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3531" for this suite. • [SLOW TEST:74.309 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":311,"completed":267,"skipped":4496,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:29:17.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name projected-configmap-test-volume-map-9344d1c9-231f-450d-887f-21633f46db3f STEP: Creating a pod to test consume configMaps Feb 4 14:29:17.655: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-14748f33-c7af-494d-9bc3-62eb87fc023e" in namespace "projected-4077" to be "Succeeded or Failed" Feb 4 14:29:17.670: INFO: Pod "pod-projected-configmaps-14748f33-c7af-494d-9bc3-62eb87fc023e": Phase="Pending", Reason="", readiness=false. Elapsed: 15.718874ms Feb 4 14:29:19.676: INFO: Pod "pod-projected-configmaps-14748f33-c7af-494d-9bc3-62eb87fc023e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020972804s Feb 4 14:29:21.681: INFO: Pod "pod-projected-configmaps-14748f33-c7af-494d-9bc3-62eb87fc023e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025935445s STEP: Saw pod success Feb 4 14:29:21.681: INFO: Pod "pod-projected-configmaps-14748f33-c7af-494d-9bc3-62eb87fc023e" satisfied condition "Succeeded or Failed" Feb 4 14:29:21.684: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-14748f33-c7af-494d-9bc3-62eb87fc023e container agnhost-container: STEP: delete the pod Feb 4 14:29:21.708: INFO: Waiting for pod pod-projected-configmaps-14748f33-c7af-494d-9bc3-62eb87fc023e to disappear Feb 4 14:29:21.758: INFO: Pod pod-projected-configmaps-14748f33-c7af-494d-9bc3-62eb87fc023e no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:29:21.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4077" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":311,"completed":268,"skipped":4502,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:29:21.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 14:29:21.856: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:29:28.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5665" for this suite. 
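For reference, the CRD listing flow this test drives through client-go can be reproduced from the command line; a minimal sketch, assuming a throwaway CRD whose group, kind, and plural (example.com/Widget/widgets) are hypothetical rather than the suite's generated names:

# Create a disposable CRD, list CRD objects, then clean up.
cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
EOF
kubectl get customresourcedefinitions    # the "listing ... works" step
kubectl delete crd widgets.example.com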
• [SLOW TEST:6.440 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":311,"completed":269,"skipped":4551,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:29:28.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1314 STEP: creating the pod Feb 4 14:29:28.334: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-3298 create -f -' Feb 4 14:29:28.726: INFO: stderr: "" Feb 4 14:29:28.726: INFO: stdout: "pod/pause created\n" Feb 4 14:29:28.726: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Feb 4 14:29:28.726: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3298" to be "running and ready" Feb 4 14:29:28.731: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 5.079038ms Feb 4 14:29:30.736: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010325908s Feb 4 14:29:32.740: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.01401002s Feb 4 14:29:32.740: INFO: Pod "pause" satisfied condition "running and ready" Feb 4 14:29:32.740: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: adding the label testing-label with value testing-label-value to a pod Feb 4 14:29:32.740: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-3298 label pods pause testing-label=testing-label-value' Feb 4 14:29:32.853: INFO: stderr: "" Feb 4 14:29:32.853: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Feb 4 14:29:32.853: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-3298 get pod pause -L testing-label' Feb 4 14:29:32.953: INFO: stderr: "" Feb 4 14:29:32.953: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Feb 4 14:29:32.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-3298 label pods pause testing-label-' Feb 4 14:29:33.077: INFO: stderr: "" Feb 4 14:29:33.077: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Feb 4 14:29:33.077: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-3298 get pod pause -L testing-label' Feb 4 14:29:33.228: INFO: stderr: "" Feb 4 14:29:33.228: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1320 STEP: using delete to clean up resources Feb 4 14:29:33.228: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-3298 delete --grace-period=0 --force -f -' Feb 4 14:29:33.362: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 4 14:29:33.362: INFO: stdout: "pod \"pause\" force deleted\n" Feb 4 14:29:33.362: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-3298 get rc,svc -l name=pause --no-headers' Feb 4 14:29:33.453: INFO: stderr: "No resources found in kubectl-3298 namespace.\n" Feb 4 14:29:33.453: INFO: stdout: "" Feb 4 14:29:33.453: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-3298 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 4 14:29:33.741: INFO: stderr: "" Feb 4 14:29:33.741: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:29:33.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3298" for this suite. 
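For reference, the add/verify/remove cycle above condenses to the following kubectl invocations (pod name and label copied from the test output; the --server, --kubeconfig, and --namespace flags are omitted for brevity):

kubectl label pods pause testing-label=testing-label-value   # add the label
kubectl get pod pause -L testing-label                       # show it as a column
kubectl label pods pause testing-label-                      # trailing '-' removes it
kubectl get pod pause -L testing-label                       # column is now empty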
• [SLOW TEST:5.526 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1312 should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":311,"completed":270,"skipped":4650,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:29:33.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Feb 4 14:29:34.814: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Feb 4 14:29:36.892: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045774, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045774, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045774, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045774, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-5d6d98d788\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 14:29:38.896: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045774, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045774, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045774, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045774, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-5d6d98d788\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 4 14:29:41.928: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 14:29:41.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:29:43.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-1659" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:9.610 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":311,"completed":271,"skipped":4690,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:29:43.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating server pod server in namespace prestop-133 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-133 STEP: Deleting pre-stop pod Feb 4 14:29:56.565: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:29:56.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-133" for this suite. • [SLOW TEST:13.271 seconds] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":311,"completed":272,"skipped":4744,"failed":0} SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:29:56.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating the pod Feb 4 14:29:57.218: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:30:08.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2794" for this suite. 
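A minimal standalone sketch of the behavior this test asserts: with restartPolicy: Never, a failed init container is not retried, the pod phase settles at Failed, and the app container never starts. The pod name, images, and commands below are illustrative, not the suite's generated spec:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: busybox
    command: ["sh", "-c", "exit 1"]    # fails once; Never means no retry
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo unreachable"]    # never started
EOF
kubectl get pod init-fail-demo    # STATUS shows Init:Error; phase is Failed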
• [SLOW TEST:11.475 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":311,"completed":273,"skipped":4746,"failed":0} [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:30:08.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: getting /apis STEP: getting /apis/node.k8s.io STEP: getting /apis/node.k8s.io/v1 STEP: creating STEP: watching Feb 4 14:30:08.434: INFO: starting watch STEP: getting STEP: listing STEP: patching STEP: updating Feb 4 14:30:08.562: INFO: waiting for watch events with expected annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:30:08.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-7100" for this suite. •{"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":311,"completed":274,"skipped":4746,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:30:08.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name projected-configmap-test-volume-4e5b03d2-1715-4948-bbfb-0240d993d6d9 STEP: Creating a pod to test consume configMaps Feb 4 14:30:08.809: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-37ceb1df-096f-47ae-88f8-cc14ab0941c9" in namespace "projected-3134" to be "Succeeded or Failed" Feb 4 14:30:08.820: INFO: Pod "pod-projected-configmaps-37ceb1df-096f-47ae-88f8-cc14ab0941c9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.395927ms Feb 4 14:30:10.897: INFO: Pod "pod-projected-configmaps-37ceb1df-096f-47ae-88f8-cc14ab0941c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087579744s Feb 4 14:30:12.901: INFO: Pod "pod-projected-configmaps-37ceb1df-096f-47ae-88f8-cc14ab0941c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091537789s STEP: Saw pod success Feb 4 14:30:12.901: INFO: Pod "pod-projected-configmaps-37ceb1df-096f-47ae-88f8-cc14ab0941c9" satisfied condition "Succeeded or Failed" Feb 4 14:30:12.904: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-37ceb1df-096f-47ae-88f8-cc14ab0941c9 container agnhost-container: STEP: delete the pod Feb 4 14:30:12.941: INFO: Waiting for pod pod-projected-configmaps-37ceb1df-096f-47ae-88f8-cc14ab0941c9 to disappear Feb 4 14:30:12.987: INFO: Pod pod-projected-configmaps-37ceb1df-096f-47ae-88f8-cc14ab0941c9 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:30:12.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3134" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":311,"completed":275,"skipped":4758,"failed":0} SS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:30:12.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Feb 4 14:30:17.139: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-7942 PodName:var-expansion-bdbb0e20-a3bc-475e-af5c-660744273c21 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 4 14:30:17.139: INFO: >>> kubeConfig: /root/.kube/config I0204 14:30:17.176167 7 log.go:181] (0xc000ab4840) (0xc0011d10e0) Create stream I0204 14:30:17.176204 7 log.go:181] (0xc000ab4840) (0xc0011d10e0) Stream added, broadcasting: 1 I0204 14:30:17.178794 7 log.go:181] (0xc000ab4840) Reply frame received for 1 I0204 14:30:17.178853 7 log.go:181] (0xc000ab4840) (0xc000aeea00) Create stream I0204 14:30:17.178874 7 log.go:181] (0xc000ab4840) (0xc000aeea00) Stream added, broadcasting: 3 I0204 14:30:17.179870 7 log.go:181] (0xc000ab4840) Reply frame received for 3 I0204 14:30:17.179904 7 log.go:181] (0xc000ab4840) (0xc0012501e0) Create stream I0204 14:30:17.179917 7 log.go:181] (0xc000ab4840) (0xc0012501e0) Stream added, broadcasting: 5 I0204 14:30:17.181058 7 log.go:181] (0xc000ab4840) Reply frame received for 5 I0204 14:30:17.252578 7 log.go:181] (0xc000ab4840) Data frame received for 5 I0204 
14:30:17.252612 7 log.go:181] (0xc0012501e0) (5) Data frame handling I0204 14:30:17.252662 7 log.go:181] (0xc000ab4840) Data frame received for 3 I0204 14:30:17.252699 7 log.go:181] (0xc000aeea00) (3) Data frame handling I0204 14:30:17.254173 7 log.go:181] (0xc000ab4840) Data frame received for 1 I0204 14:30:17.254209 7 log.go:181] (0xc0011d10e0) (1) Data frame handling I0204 14:30:17.254227 7 log.go:181] (0xc0011d10e0) (1) Data frame sent I0204 14:30:17.254336 7 log.go:181] (0xc000ab4840) (0xc0011d10e0) Stream removed, broadcasting: 1 I0204 14:30:17.254390 7 log.go:181] (0xc000ab4840) Go away received I0204 14:30:17.254445 7 log.go:181] (0xc000ab4840) (0xc0011d10e0) Stream removed, broadcasting: 1 I0204 14:30:17.254463 7 log.go:181] (0xc000ab4840) (0xc000aeea00) Stream removed, broadcasting: 3 I0204 14:30:17.254480 7 log.go:181] (0xc000ab4840) (0xc0012501e0) Stream removed, broadcasting: 5 STEP: test for file in mounted path Feb 4 14:30:17.258: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-7942 PodName:var-expansion-bdbb0e20-a3bc-475e-af5c-660744273c21 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 4 14:30:17.258: INFO: >>> kubeConfig: /root/.kube/config I0204 14:30:17.288469 7 log.go:181] (0xc000ab5080) (0xc0011d1c20) Create stream I0204 14:30:17.288506 7 log.go:181] (0xc000ab5080) (0xc0011d1c20) Stream added, broadcasting: 1 I0204 14:30:17.290673 7 log.go:181] (0xc000ab5080) Reply frame received for 1 I0204 14:30:17.290710 7 log.go:181] (0xc000ab5080) (0xc000aeef00) Create stream I0204 14:30:17.290723 7 log.go:181] (0xc000ab5080) (0xc000aeef00) Stream added, broadcasting: 3 I0204 14:30:17.291747 7 log.go:181] (0xc000ab5080) Reply frame received for 3 I0204 14:30:17.291775 7 log.go:181] (0xc000ab5080) (0xc001250280) Create stream I0204 14:30:17.291787 7 log.go:181] (0xc000ab5080) (0xc001250280) Stream added, broadcasting: 5 I0204 14:30:17.292573 7 log.go:181] (0xc000ab5080) Reply frame received for 5 I0204 14:30:17.374261 7 log.go:181] (0xc000ab5080) Data frame received for 3 I0204 14:30:17.374297 7 log.go:181] (0xc000ab5080) Data frame received for 5 I0204 14:30:17.374325 7 log.go:181] (0xc001250280) (5) Data frame handling I0204 14:30:17.374376 7 log.go:181] (0xc000aeef00) (3) Data frame handling I0204 14:30:17.375872 7 log.go:181] (0xc000ab5080) Data frame received for 1 I0204 14:30:17.375910 7 log.go:181] (0xc0011d1c20) (1) Data frame handling I0204 14:30:17.375941 7 log.go:181] (0xc0011d1c20) (1) Data frame sent I0204 14:30:17.375970 7 log.go:181] (0xc000ab5080) (0xc0011d1c20) Stream removed, broadcasting: 1 I0204 14:30:17.376002 7 log.go:181] (0xc000ab5080) Go away received I0204 14:30:17.376109 7 log.go:181] (0xc000ab5080) (0xc0011d1c20) Stream removed, broadcasting: 1 I0204 14:30:17.376127 7 log.go:181] (0xc000ab5080) (0xc000aeef00) Stream removed, broadcasting: 3 I0204 14:30:17.376141 7 log.go:181] (0xc000ab5080) (0xc001250280) Stream removed, broadcasting: 5 STEP: updating the annotation value Feb 4 14:30:17.887: INFO: Successfully updated pod "var-expansion-bdbb0e20-a3bc-475e-af5c-660744273c21" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Feb 4 14:30:17.914: INFO: Deleting pod "var-expansion-bdbb0e20-a3bc-475e-af5c-660744273c21" in namespace "var-expansion-7942" Feb 4 14:30:17.919: INFO: Wait up to 5m0s for pod "var-expansion-bdbb0e20-a3bc-475e-af5c-660744273c21" to be fully deleted [AfterEach] [k8s.io] Variable Expansion 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:31:51.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7942" for this suite. • [SLOW TEST:98.958 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":311,"completed":276,"skipped":4760,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:31:51.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0204 14:31:53.660605 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Feb 4 14:32:55.680: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:32:55.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2759" for this suite. 
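The cascade verified here is the default, non-orphaning delete path: ownerReferences let the garbage collector remove the Deployment's ReplicaSet and pods once the Deployment is gone. A rough command-line equivalent (the test sets deleteOptions through the API directly; the deployment name and label below are illustrative):

kubectl create deployment web --image=nginx
kubectl delete deployment web    # non-orphaning delete: dependents are collected
kubectl get rs,pods -l app=web   # eventually: No resources found
# Requesting propagationPolicy=Orphan in deleteOptions would instead
# leave the ReplicaSet and its pods behind.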
• [SLOW TEST:63.737 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":311,"completed":277,"skipped":4770,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:32:55.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 14:32:55.799: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Feb 4 14:32:55.832: INFO: Pod name sample-pod: Found 0 pods out of 1 Feb 4 14:33:00.845: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 4 14:33:00.845: INFO: Creating deployment "test-rolling-update-deployment" Feb 4 14:33:00.862: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Feb 4 14:33:00.905: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Feb 4 14:33:02.913: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Feb 4 14:33:02.915: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045981, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045981, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045981, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045980, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-f66cf855\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 14:33:04.928: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Feb 4 14:33:04.937: INFO: 
Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-9831 d105c89c-b28b-4fba-96ae-cf4b229060a5 2113121 1 2021-02-04 14:33:00 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2021-02-04 14:33:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-02-04 14:33:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.26 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005520608 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-02-04 14:33:01 +0000 UTC,LastTransitionTime:2021-02-04 14:33:01 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-f66cf855" has successfully progressed.,LastUpdateTime:2021-02-04 14:33:04 +0000 UTC,LastTransitionTime:2021-02-04 14:33:00 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Feb 4 14:33:04.940: INFO: New ReplicaSet "test-rolling-update-deployment-f66cf855" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-f66cf855 deployment-9831 ae96695c-0d88-4fa9-a54c-0a5b139779af 2113110 1 2021-02-04 14:33:00 +0000 UTC map[name:sample-pod pod-template-hash:f66cf855] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment d105c89c-b28b-4fba-96ae-cf4b229060a5 0xc005520a5f 0xc005520a70}] [] [{kube-controller-manager Update apps/v1 2021-02-04 14:33:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d105c89c-b28b-4fba-96ae-cf4b229060a5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: f66cf855,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:f66cf855] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.26 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005520ae8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Feb 4 14:33:04.940: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Feb 4 14:33:04.940: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-9831 e3df0947-b007-4470-b307-3f0a0ea16f31 2113120 2 2021-02-04 14:32:55 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment d105c89c-b28b-4fba-96ae-cf4b229060a5 0xc00552095f 0xc005520970}] [] [{e2e.test Update apps/v1 2021-02-04 14:32:55 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-02-04 14:33:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d105c89c-b28b-4fba-96ae-cf4b229060a5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005520a08 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 4 14:33:04.943: INFO: Pod "test-rolling-update-deployment-f66cf855-g49ws" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-f66cf855-g49ws test-rolling-update-deployment-f66cf855- deployment-9831 d618efc9-7001-4edc-ae8c-1c8ad2bef56d 2113109 0 2021-02-04 14:33:00 +0000 UTC map[name:sample-pod pod-template-hash:f66cf855] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-f66cf855 ae96695c-0d88-4fa9-a54c-0a5b139779af 0xc0054577bf 0xc0054577d0}] [] [{kube-controller-manager Update v1 2021-02-04 14:33:00 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ae96695c-0d88-4fa9-a54c-0a5b139779af\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 14:33:04 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.157\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wn54c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wn54c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.26,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wn54c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 14:33:01 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 14:33:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 14:33:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 14:33:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.2.157,StartTime:2021-02-04 14:33:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-04 14:33:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.26,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e,ContainerID:containerd://0c22927a8f53cf980fce16b81a5a7f3d0c61db46de755a2783598843bbd88b15,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.157,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:33:04.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9831" for this suite. • [SLOW TEST:9.265 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":311,"completed":278,"skipped":4775,"failed":0} [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:33:04.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 4 14:33:05.180: INFO: Waiting up to 5m0s for pod "pod-e88e63d5-51a5-487e-a116-18a093ed239b" in namespace "emptydir-7193" to be "Succeeded or Failed" Feb 4 14:33:05.184: INFO: Pod "pod-e88e63d5-51a5-487e-a116-18a093ed239b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.731481ms Feb 4 14:33:07.188: INFO: Pod "pod-e88e63d5-51a5-487e-a116-18a093ed239b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007941931s Feb 4 14:33:09.193: INFO: Pod "pod-e88e63d5-51a5-487e-a116-18a093ed239b": Phase="Running", Reason="", readiness=true. Elapsed: 4.013339913s Feb 4 14:33:11.196: INFO: Pod "pod-e88e63d5-51a5-487e-a116-18a093ed239b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016265268s STEP: Saw pod success Feb 4 14:33:11.196: INFO: Pod "pod-e88e63d5-51a5-487e-a116-18a093ed239b" satisfied condition "Succeeded or Failed" Feb 4 14:33:11.199: INFO: Trying to get logs from node latest-worker2 pod pod-e88e63d5-51a5-487e-a116-18a093ed239b container test-container: STEP: delete the pod Feb 4 14:33:11.278: INFO: Waiting for pod pod-e88e63d5-51a5-487e-a116-18a093ed239b to disappear Feb 4 14:33:11.313: INFO: Pod pod-e88e63d5-51a5-487e-a116-18a093ed239b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:33:11.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7193" for this suite. • [SLOW TEST:6.366 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":279,"skipped":4775,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:33:11.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 14:33:11.426: INFO: Pod name rollover-pod: Found 0 pods out of 1 Feb 4 14:33:16.432: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 4 14:33:16.432: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Feb 4 14:33:18.437: INFO: Creating deployment "test-rollover-deployment" Feb 4 14:33:18.467: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Feb 4 14:33:20.515: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Feb 4 14:33:20.521: INFO: Ensure that both replica sets have 1 created replica Feb 4 14:33:20.527: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Feb 4 14:33:20.536: INFO: Updating deployment test-rollover-deployment Feb 4 14:33:20.536: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Feb 4 14:33:22.548: INFO: Wait 
for revision update of deployment "test-rollover-deployment" to 2 Feb 4 14:33:22.554: INFO: Make sure deployment "test-rollover-deployment" is complete Feb 4 14:33:22.560: INFO: all replica sets need to contain the pod-template-hash label Feb 4 14:33:22.560: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045998, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045998, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748046000, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045998, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-94f684966\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 14:33:24.587: INFO: all replica sets need to contain the pod-template-hash label Feb 4 14:33:24.587: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045998, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045998, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748046004, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045998, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-94f684966\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 14:33:26.570: INFO: all replica sets need to contain the pod-template-hash label Feb 4 14:33:26.570: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045998, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045998, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748046004, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045998, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-94f684966\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 14:33:28.569: INFO: all replica sets need to contain the pod-template-hash label Feb 4 14:33:28.570: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045998, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045998, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748046004, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045998, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-94f684966\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 14:33:30.570: INFO: all replica sets need to contain the pod-template-hash label Feb 4 14:33:30.570: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045998, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045998, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748046004, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045998, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-94f684966\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 14:33:32.569: INFO: all replica sets need to contain the pod-template-hash label Feb 4 14:33:32.569: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045998, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045998, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748046004, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045998, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-94f684966\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 14:33:34.626: INFO: Feb 4 14:33:34.626: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045998, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045998, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748046014, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748045998, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-94f684966\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 4 14:33:36.570: INFO: Feb 4 14:33:36.570: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Feb 4 14:33:36.580: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-9778 62a52565-8358-4d78-b611-02279e02a421 2113309 2 2021-02-04 14:33:18 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2021-02-04 14:33:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-02-04 14:33:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.26 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0056942d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2021-02-04 14:33:18 +0000 UTC,LastTransitionTime:2021-02-04 14:33:18 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-94f684966" has successfully progressed.,LastUpdateTime:2021-02-04 14:33:34 +0000 UTC,LastTransitionTime:2021-02-04 14:33:18 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Feb 4 14:33:36.584: INFO: New ReplicaSet "test-rollover-deployment-94f684966" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-94f684966 deployment-9778 143fc73f-b1ee-4f03-a7f8-d509685347d2 2113299 2 2021-02-04 14:33:20 +0000 UTC map[name:rollover-pod pod-template-hash:94f684966] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 62a52565-8358-4d78-b611-02279e02a421 0xc0051426c0 0xc0051426c1}] [] [{kube-controller-manager Update apps/v1 2021-02-04 14:33:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62a52565-8358-4d78-b611-02279e02a421\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 94f684966,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:94f684966] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.26 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005142738 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] 
map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Feb 4 14:33:36.584: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Feb 4 14:33:36.584: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9778 c585ac55-3e6d-420a-828e-08259bcee53c 2113308 2 2021-02-04 14:33:11 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 62a52565-8358-4d78-b611-02279e02a421 0xc0051424a7 0xc0051424a8}] [] [{e2e.test Update apps/v1 2021-02-04 14:33:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2021-02-04 14:33:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62a52565-8358-4d78-b611-02279e02a421\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005142548 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 4 14:33:36.584: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-9778 c226c049-89df-40a0-b6b6-59fd29965759 2113262 2 2021-02-04 14:33:18 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 62a52565-8358-4d78-b611-02279e02a421 0xc0051425b7 0xc0051425b8}] [] [{kube-controller-manager Update apps/v1 2021-02-04 14:33:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"62a52565-8358-4d78-b611-02279e02a421\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005142658 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 4 14:33:36.588: INFO: Pod "test-rollover-deployment-94f684966-9l7rg" is available: &Pod{ObjectMeta:{test-rollover-deployment-94f684966-9l7rg test-rollover-deployment-94f684966- deployment-9778 6b21a830-3f1a-4bce-bdac-b34c9a596524 2113277 0 2021-02-04 14:33:20 +0000 UTC map[name:rollover-pod pod-template-hash:94f684966] map[] [{apps/v1 ReplicaSet test-rollover-deployment-94f684966 143fc73f-b1ee-4f03-a7f8-d509685347d2 0xc005ce27d0 0xc005ce27d1}] [] [{kube-controller-manager Update v1 2021-02-04 14:33:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"143fc73f-b1ee-4f03-a7f8-d509685347d2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-02-04 14:33:24 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.159\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tw8hd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tw8hd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.26,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tw8hd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 14:33:20 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 14:33:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 14:33:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-02-04 14:33:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.2.159,StartTime:2021-02-04 14:33:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-02-04 14:33:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.26,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e,ContainerID:containerd://f2e0e7635d232ac4a1957ae2483aecc3983c7a59fde95212c352341debee7bc3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.159,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:33:36.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9778" for this suite. • [SLOW TEST:25.274 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":311,"completed":280,"skipped":4793,"failed":0} ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:33:36.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8240 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8240;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8240 A)" && test -n "$$check" && echo OK >
/results/wheezy_tcp@dns-test-service.dns-8240;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8240.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8240.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8240.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8240.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8240.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8240.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8240.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8240.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8240.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8240.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8240.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8240.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8240.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 184.129.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.129.184_udp@PTR;check="$$(dig +tcp +noall +answer +search 184.129.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.129.184_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8240 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8240;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8240 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8240;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8240.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8240.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8240.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8240.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8240.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8240.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8240.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8240.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8240.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8240.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8240.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8240.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8240.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 184.129.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.129.184_udp@PTR;check="$$(dig +tcp +noall +answer +search 184.129.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.129.184_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 4 14:33:43.050: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:43.053: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:43.062: INFO: Unable to read wheezy_udp@dns-test-service.dns-8240 from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:43.080: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8240 from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:43.084: INFO: Unable to read wheezy_udp@dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:43.087: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:43.090: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:43.092: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:43.127: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:43.129: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:43.131: INFO: Unable to read jessie_udp@dns-test-service.dns-8240 from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:43.133: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-8240 from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:43.135: INFO: Unable to read jessie_udp@dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:43.138: INFO: Unable to read jessie_tcp@dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:43.140: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:43.457: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:43.731: INFO: Lookups using dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8240 wheezy_tcp@dns-test-service.dns-8240 wheezy_udp@dns-test-service.dns-8240.svc wheezy_tcp@dns-test-service.dns-8240.svc wheezy_udp@_http._tcp.dns-test-service.dns-8240.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8240.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8240 jessie_tcp@dns-test-service.dns-8240 jessie_udp@dns-test-service.dns-8240.svc jessie_tcp@dns-test-service.dns-8240.svc jessie_udp@_http._tcp.dns-test-service.dns-8240.svc jessie_tcp@_http._tcp.dns-test-service.dns-8240.svc] Feb 4 14:33:48.736: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:48.738: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:48.741: INFO: Unable to read wheezy_udp@dns-test-service.dns-8240 from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:48.743: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8240 from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:48.746: INFO: Unable to read wheezy_udp@dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:48.749: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:48.751: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:48.755: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:48.777: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:48.780: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:48.783: INFO: Unable to read jessie_udp@dns-test-service.dns-8240 from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:48.786: INFO: Unable to read jessie_tcp@dns-test-service.dns-8240 from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:48.789: INFO: Unable to read jessie_udp@dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:48.792: INFO: Unable to read jessie_tcp@dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:48.794: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:48.797: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:48.814: INFO: Lookups using dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8240 wheezy_tcp@dns-test-service.dns-8240 wheezy_udp@dns-test-service.dns-8240.svc wheezy_tcp@dns-test-service.dns-8240.svc wheezy_udp@_http._tcp.dns-test-service.dns-8240.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8240.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8240 jessie_tcp@dns-test-service.dns-8240 jessie_udp@dns-test-service.dns-8240.svc jessie_tcp@dns-test-service.dns-8240.svc jessie_udp@_http._tcp.dns-test-service.dns-8240.svc jessie_tcp@_http._tcp.dns-test-service.dns-8240.svc] Feb 4 14:33:53.768: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:53.771: INFO: Unable to read wheezy_tcp@dns-test-service from 
pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:53.775: INFO: Unable to read wheezy_udp@dns-test-service.dns-8240 from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:53.777: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8240 from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:53.781: INFO: Unable to read wheezy_udp@dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:53.784: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:53.786: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:53.789: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:53.834: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:53.837: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:53.839: INFO: Unable to read jessie_udp@dns-test-service.dns-8240 from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:53.841: INFO: Unable to read jessie_tcp@dns-test-service.dns-8240 from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:53.844: INFO: Unable to read jessie_udp@dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:53.846: INFO: Unable to read jessie_tcp@dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:53.849: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:53.852: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8240.svc from pod 
dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:53.899: INFO: Lookups using dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8240 wheezy_tcp@dns-test-service.dns-8240 wheezy_udp@dns-test-service.dns-8240.svc wheezy_tcp@dns-test-service.dns-8240.svc wheezy_udp@_http._tcp.dns-test-service.dns-8240.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8240.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8240 jessie_tcp@dns-test-service.dns-8240 jessie_udp@dns-test-service.dns-8240.svc jessie_tcp@dns-test-service.dns-8240.svc jessie_udp@_http._tcp.dns-test-service.dns-8240.svc jessie_tcp@_http._tcp.dns-test-service.dns-8240.svc] Feb 4 14:33:58.737: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:58.740: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:58.743: INFO: Unable to read wheezy_udp@dns-test-service.dns-8240 from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:58.746: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8240 from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:58.748: INFO: Unable to read wheezy_udp@dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:58.751: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:58.754: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:58.757: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:58.778: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:58.781: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:58.784: INFO: Unable to read jessie_udp@dns-test-service.dns-8240 from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could 
not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:58.787: INFO: Unable to read jessie_tcp@dns-test-service.dns-8240 from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:58.790: INFO: Unable to read jessie_udp@dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:58.792: INFO: Unable to read jessie_tcp@dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:58.795: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:58.798: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:33:58.815: INFO: Lookups using dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8240 wheezy_tcp@dns-test-service.dns-8240 wheezy_udp@dns-test-service.dns-8240.svc wheezy_tcp@dns-test-service.dns-8240.svc wheezy_udp@_http._tcp.dns-test-service.dns-8240.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8240.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8240 jessie_tcp@dns-test-service.dns-8240 jessie_udp@dns-test-service.dns-8240.svc jessie_tcp@dns-test-service.dns-8240.svc jessie_udp@_http._tcp.dns-test-service.dns-8240.svc jessie_tcp@_http._tcp.dns-test-service.dns-8240.svc] Feb 4 14:34:03.736: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:34:03.739: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:34:03.742: INFO: Unable to read wheezy_udp@dns-test-service.dns-8240 from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:34:03.745: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8240 from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:34:03.748: INFO: Unable to read wheezy_udp@dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d) Feb 4 14:34:03.750: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8240.svc from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d: the server could not find the requested resource (get pods 
dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d)
Feb 4 14:34:03.753 - 14:34:03.794: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8240.svc, wheezy_tcp@_http._tcp.dns-test-service.dns-8240.svc, and all eight jessie_* records from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d; every lookup failed with: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d)
Feb 4 14:34:03.810: INFO: Lookups using dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8240 wheezy_tcp@dns-test-service.dns-8240 wheezy_udp@dns-test-service.dns-8240.svc wheezy_tcp@dns-test-service.dns-8240.svc wheezy_udp@_http._tcp.dns-test-service.dns-8240.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8240.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8240 jessie_tcp@dns-test-service.dns-8240 jessie_udp@dns-test-service.dns-8240.svc jessie_tcp@dns-test-service.dns-8240.svc jessie_udp@_http._tcp.dns-test-service.dns-8240.svc jessie_tcp@_http._tcp.dns-test-service.dns-8240.svc]
Feb 4 14:34:08.739 - 14:34:08.794: INFO: Unable to read the same sixteen wheezy_* and jessie_* records from pod dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d; every lookup again failed with: the server could not find the requested resource (get pods dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d)
Feb 4 14:34:08.812: INFO: Lookups using dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d failed for the same sixteen records listed at 14:34:03.810
Feb 4 14:34:13.833: INFO: DNS probes using dns-8240/dns-test-c96e1c24-3ac6-464d-82db-f0d0db9bd03d succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 4 14:34:14.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8240" for this suite.
• [SLOW TEST:38.361 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":311,"completed":281,"skipped":4793,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 4 14:34:14.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640
Feb 4 14:34:15.022: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Feb 4 14:34:16.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8984" for this suite.
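The defaulting test above exercises OpenAPI v3 defaults declared in a structural CRD schema: the apiserver fills them in both on incoming requests and when objects are read back from storage. A minimal hand-run sketch of the same mechanism, with an illustrative group, kind, and field (the e2e framework generates its own randomized names):

  kubectl apply -f - <<'EOF'
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: widgets.demo.example.com
  spec:
    group: demo.example.com
    scope: Namespaced
    names:
      plural: widgets
      singular: widget
      kind: Widget
    versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer
                  default: 2   # filled in by the apiserver whenever the field is unset
  EOF
  kubectl wait --for=condition=Established crd/widgets.demo.example.com
  # Create an instance that omits spec.replicas; the default appears on admission.
  kubectl apply -f - <<'EOF'
  apiVersion: demo.example.com/v1
  kind: Widget
  metadata:
    name: example
  spec: {}
  EOF
  kubectl get widget example -o jsonpath='{.spec.replicas}'   # prints 2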
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":311,"completed":282,"skipped":4812,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:34:16.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name projected-configmap-test-volume-5f3f39d1-f32d-426b-b83b-14c014577471 STEP: Creating a pod to test consume configMaps Feb 4 14:34:17.020: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4c0bf29f-5b92-49bb-a511-a41c5ee0cdb9" in namespace "projected-4988" to be "Succeeded or Failed" Feb 4 14:34:17.027: INFO: Pod "pod-projected-configmaps-4c0bf29f-5b92-49bb-a511-a41c5ee0cdb9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084902ms Feb 4 14:34:19.039: INFO: Pod "pod-projected-configmaps-4c0bf29f-5b92-49bb-a511-a41c5ee0cdb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018369101s Feb 4 14:34:21.057: INFO: Pod "pod-projected-configmaps-4c0bf29f-5b92-49bb-a511-a41c5ee0cdb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036750516s STEP: Saw pod success Feb 4 14:34:21.057: INFO: Pod "pod-projected-configmaps-4c0bf29f-5b92-49bb-a511-a41c5ee0cdb9" satisfied condition "Succeeded or Failed" Feb 4 14:34:21.060: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-4c0bf29f-5b92-49bb-a511-a41c5ee0cdb9 container agnhost-container: STEP: delete the pod Feb 4 14:34:21.165: INFO: Waiting for pod pod-projected-configmaps-4c0bf29f-5b92-49bb-a511-a41c5ee0cdb9 to disappear Feb 4 14:34:21.170: INFO: Pod pod-projected-configmaps-4c0bf29f-5b92-49bb-a511-a41c5ee0cdb9 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:34:21.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4988" for this suite. 
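The projected ConfigMap test above verifies that defaultMode is applied to every file the volume projects. A rough stand-alone equivalent with an illustrative ConfigMap, pod name, and image (the 0400 mode here only mirrors the scenario loosely):

  kubectl create configmap app-config --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: checker
      image: busybox:1.36
      # -L follows the projected symlink; expect -r-------- for mode 0400
      command: ["sh", "-c", "ls -lL /etc/projected/data-1"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/projected
    volumes:
    - name: cfg
      projected:
        defaultMode: 0400   # YAML octal literal, stored as decimal 256
        sources:
        - configMap:
            name: app-config
  EOF
  kubectl logs projected-mode-demo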
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":283,"skipped":4819,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:34:21.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:740 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a service externalname-service with the type=ExternalName in namespace services-8358 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-8358 I0204 14:34:21.435639 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-8358, replica count: 2 I0204 14:34:24.486112 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0204 14:34:27.486381 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 4 14:34:27.486: INFO: Creating new exec pod Feb 4 14:34:32.547: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-8358 exec execpodqjxvb -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Feb 4 14:34:32.781: INFO: stderr: "I0204 14:34:32.687805 3945 log.go:181] (0xc00087b080) (0xc0009563c0) Create stream\nI0204 14:34:32.687878 3945 log.go:181] (0xc00087b080) (0xc0009563c0) Stream added, broadcasting: 1\nI0204 14:34:32.690889 3945 log.go:181] (0xc00087b080) Reply frame received for 1\nI0204 14:34:32.690946 3945 log.go:181] (0xc00087b080) (0xc000956460) Create stream\nI0204 14:34:32.690959 3945 log.go:181] (0xc00087b080) (0xc000956460) Stream added, broadcasting: 3\nI0204 14:34:32.692122 3945 log.go:181] (0xc00087b080) Reply frame received for 3\nI0204 14:34:32.692176 3945 log.go:181] (0xc00087b080) (0xc0003b8f00) Create stream\nI0204 14:34:32.692185 3945 log.go:181] (0xc00087b080) (0xc0003b8f00) Stream added, broadcasting: 5\nI0204 14:34:32.693361 3945 log.go:181] (0xc00087b080) Reply frame received for 5\nI0204 14:34:32.776027 3945 log.go:181] (0xc00087b080) Data frame received for 5\nI0204 14:34:32.776058 3945 log.go:181] (0xc0003b8f00) (5) Data frame handling\nI0204 14:34:32.776077 3945 log.go:181] (0xc0003b8f00) (5) Data frame sent\nI0204 14:34:32.776121 3945 log.go:181] (0xc00087b080) Data frame received for 5\nI0204 14:34:32.776134 3945 log.go:181] (0xc0003b8f00) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] 
succeeded!\nI0204 14:34:32.776259 3945 log.go:181] (0xc00087b080) Data frame received for 3\nI0204 14:34:32.776274 3945 log.go:181] (0xc000956460) (3) Data frame handling\nI0204 14:34:32.777460 3945 log.go:181] (0xc00087b080) Data frame received for 1\nI0204 14:34:32.777474 3945 log.go:181] (0xc0009563c0) (1) Data frame handling\nI0204 14:34:32.777495 3945 log.go:181] (0xc0009563c0) (1) Data frame sent\nI0204 14:34:32.777513 3945 log.go:181] (0xc00087b080) (0xc0009563c0) Stream removed, broadcasting: 1\nI0204 14:34:32.777644 3945 log.go:181] (0xc00087b080) Go away received\nI0204 14:34:32.777820 3945 log.go:181] (0xc00087b080) (0xc0009563c0) Stream removed, broadcasting: 1\nI0204 14:34:32.777834 3945 log.go:181] (0xc00087b080) (0xc000956460) Stream removed, broadcasting: 3\nI0204 14:34:32.777843 3945 log.go:181] (0xc00087b080) (0xc0003b8f00) Stream removed, broadcasting: 5\n" Feb 4 14:34:32.782: INFO: stdout: "" Feb 4 14:34:32.782: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=services-8358 exec execpodqjxvb -- /bin/sh -x -c nc -zv -t -w 2 10.96.104.23 80' Feb 4 14:34:32.999: INFO: stderr: "I0204 14:34:32.932347 3963 log.go:181] (0xc00003ad10) (0xc00068c3c0) Create stream\nI0204 14:34:32.932398 3963 log.go:181] (0xc00003ad10) (0xc00068c3c0) Stream added, broadcasting: 1\nI0204 14:34:32.934165 3963 log.go:181] (0xc00003ad10) Reply frame received for 1\nI0204 14:34:32.934204 3963 log.go:181] (0xc00003ad10) (0xc00063c000) Create stream\nI0204 14:34:32.934213 3963 log.go:181] (0xc00003ad10) (0xc00063c000) Stream added, broadcasting: 3\nI0204 14:34:32.934936 3963 log.go:181] (0xc00003ad10) Reply frame received for 3\nI0204 14:34:32.934976 3963 log.go:181] (0xc00003ad10) (0xc0003bd400) Create stream\nI0204 14:34:32.935037 3963 log.go:181] (0xc00003ad10) (0xc0003bd400) Stream added, broadcasting: 5\nI0204 14:34:32.935961 3963 log.go:181] (0xc00003ad10) Reply frame received for 5\nI0204 14:34:32.991858 3963 log.go:181] (0xc00003ad10) Data frame received for 5\nI0204 14:34:32.991919 3963 log.go:181] (0xc0003bd400) (5) Data frame handling\nI0204 14:34:32.991944 3963 log.go:181] (0xc0003bd400) (5) Data frame sent\nI0204 14:34:32.991961 3963 log.go:181] (0xc00003ad10) Data frame received for 5\nI0204 14:34:32.991976 3963 log.go:181] (0xc0003bd400) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.104.23 80\nConnection to 10.96.104.23 80 port [tcp/http] succeeded!\nI0204 14:34:32.992083 3963 log.go:181] (0xc00003ad10) Data frame received for 3\nI0204 14:34:32.992138 3963 log.go:181] (0xc00063c000) (3) Data frame handling\nI0204 14:34:32.993424 3963 log.go:181] (0xc00003ad10) Data frame received for 1\nI0204 14:34:32.993451 3963 log.go:181] (0xc00068c3c0) (1) Data frame handling\nI0204 14:34:32.993468 3963 log.go:181] (0xc00068c3c0) (1) Data frame sent\nI0204 14:34:32.993484 3963 log.go:181] (0xc00003ad10) (0xc00068c3c0) Stream removed, broadcasting: 1\nI0204 14:34:32.993570 3963 log.go:181] (0xc00003ad10) Go away received\nI0204 14:34:32.993934 3963 log.go:181] (0xc00003ad10) (0xc00068c3c0) Stream removed, broadcasting: 1\nI0204 14:34:32.993951 3963 log.go:181] (0xc00003ad10) (0xc00063c000) Stream removed, broadcasting: 3\nI0204 14:34:32.993960 3963 log.go:181] (0xc00003ad10) (0xc0003bd400) Stream removed, broadcasting: 5\n" Feb 4 14:34:32.999: INFO: stdout: "" Feb 4 14:34:32.999: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:34:33.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8358" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:744 • [SLOW TEST:11.881 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":311,"completed":284,"skipped":4837,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:34:33.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 4 14:34:33.690: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 4 14:34:35.700: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748046073, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748046073, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748046073, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748046073, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 4 14:34:38.737: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl 
attach' the pod, should be denied by the webhook Feb 4 14:34:45.111: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=webhook-2938 attach --namespace=webhook-2938 to-be-attached-pod -i -c=container1' Feb 4 14:34:45.257: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:34:45.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2938" for this suite. STEP: Destroying namespace "webhook-2938-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:12.403 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":311,"completed":285,"skipped":4862,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:34:45.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name projected-configmap-test-volume-0c0e0884-e5ca-4d91-88e6-4e9e3288347d STEP: Creating a pod to test consume configMaps Feb 4 14:34:45.617: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-695e9452-0c98-4083-8a8b-11b05f0fdbf0" in namespace "projected-1055" to be "Succeeded or Failed" Feb 4 14:34:45.621: INFO: Pod "pod-projected-configmaps-695e9452-0c98-4083-8a8b-11b05f0fdbf0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.56547ms Feb 4 14:34:47.690: INFO: Pod "pod-projected-configmaps-695e9452-0c98-4083-8a8b-11b05f0fdbf0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072806505s Feb 4 14:34:49.698: INFO: Pod "pod-projected-configmaps-695e9452-0c98-4083-8a8b-11b05f0fdbf0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.080771283s STEP: Saw pod success Feb 4 14:34:49.698: INFO: Pod "pod-projected-configmaps-695e9452-0c98-4083-8a8b-11b05f0fdbf0" satisfied condition "Succeeded or Failed" Feb 4 14:34:49.701: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-695e9452-0c98-4083-8a8b-11b05f0fdbf0 container agnhost-container: STEP: delete the pod Feb 4 14:34:49.782: INFO: Waiting for pod pod-projected-configmaps-695e9452-0c98-4083-8a8b-11b05f0fdbf0 to disappear Feb 4 14:34:49.793: INFO: Pod pod-projected-configmaps-695e9452-0c98-4083-8a8b-11b05f0fdbf0 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:34:49.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1055" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":311,"completed":286,"skipped":4889,"failed":0} SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:34:49.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:34:58.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4182" for this suite. • [SLOW TEST:8.280 seconds] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79 should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":311,"completed":287,"skipped":4896,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:34:58.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:35:09.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4920" for this suite. • [SLOW TEST:11.149 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":311,"completed":288,"skipped":4901,"failed":0} SS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:35:09.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 4 14:35:13.915: INFO: Successfully updated pod "pod-update-fd4cbbc2-431f-45e4-8f2c-242459e71051" STEP: verifying the updated pod is in kubernetes Feb 4 14:35:13.929: INFO: Pod update OK [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:35:13.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9827" for this suite. 
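The Pods update test patches a running Pod's metadata and confirms the change is visible on read-back. Done by hand, that is a strategic-merge patch against the pod object (pod name, image, and label values here are illustrative):

  kubectl run pod-update-demo --image=nginx:1.25 --labels=time=created
  kubectl patch pod pod-update-demo -p '{"metadata":{"labels":{"time":"modified"}}}'
  kubectl get pod pod-update-demo -L time   # the time column now shows "modified"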
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":311,"completed":289,"skipped":4903,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:35:13.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1455.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1455.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 4 14:35:20.249: INFO: DNS probes using dns-1455/dns-test-3a44b400-df47-4d7a-9947-82ff901c3c6e succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:35:20.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1455" for this suite. 
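The cluster DNS probes above loop dig against kubernetes.default and the pod's own A record over both UDP and TCP. A one-off manual version of the same check, using an illustrative throwaway pod (busybox 1.28 is a common choice because its nslookup behaves well):

  kubectl run dns-probe --rm -it --restart=Never --image=busybox:1.28 -- \
    nslookup kubernetes.default.svc.cluster.local
  # From any pod image that ships dig, the UDP and TCP variants the probes use:
  dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A
  dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A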
• [SLOW TEST:6.411 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":311,"completed":290,"skipped":4913,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:35:20.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 4 14:35:22.378: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 4 14:35:24.387: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748046122, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748046122, loc:(*time.Location)(0x7886c60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63748046122, loc:(*time.Location)(0x7886c60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63748046122, loc:(*time.Location)(0x7886c60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-7fd5fddcbd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 4 14:35:27.427: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:35:27.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2033" for this suite. STEP: Destroying namespace "webhook-2033-markers" for this suite. 
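The mutating webhook test registers a webhook for pod CREATE requests and then checks that a freshly created pod comes back with the webhook's defaults applied. A sketch of such a registration, assuming the service name and handler path match a webhook server you actually deployed (the caBundle below is a placeholder, not a usable CA):

  kubectl apply -f - <<'EOF'
  apiVersion: admissionregistration.k8s.io/v1
  kind: MutatingWebhookConfiguration
  metadata:
    name: pod-defaulter-demo
  webhooks:
  - name: pod-defaulter.demo.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail
    clientConfig:
      service:
        name: e2e-test-webhook   # illustrative; point at your webhook Service
        namespace: webhook-2033
        path: /mutating-pods     # illustrative handler path
      caBundle: Cg==             # placeholder; use the CA that signed the serving cert
    rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["pods"]
  EOF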
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:7.413 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":311,"completed":291,"skipped":4913,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:35:27.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test emptydir volume type on node default medium Feb 4 14:35:27.890: INFO: Waiting up to 5m0s for pod "pod-99bb425b-4262-422c-ad47-778914a20d4d" in namespace "emptydir-8539" to be "Succeeded or Failed" Feb 4 14:35:27.916: INFO: Pod "pod-99bb425b-4262-422c-ad47-778914a20d4d": Phase="Pending", Reason="", readiness=false. Elapsed: 26.040848ms Feb 4 14:35:29.924: INFO: Pod "pod-99bb425b-4262-422c-ad47-778914a20d4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033609503s Feb 4 14:35:31.927: INFO: Pod "pod-99bb425b-4262-422c-ad47-778914a20d4d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037083062s Feb 4 14:35:33.938: INFO: Pod "pod-99bb425b-4262-422c-ad47-778914a20d4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.048140664s STEP: Saw pod success Feb 4 14:35:33.938: INFO: Pod "pod-99bb425b-4262-422c-ad47-778914a20d4d" satisfied condition "Succeeded or Failed" Feb 4 14:35:33.941: INFO: Trying to get logs from node latest-worker2 pod pod-99bb425b-4262-422c-ad47-778914a20d4d container test-container: STEP: delete the pod Feb 4 14:35:33.991: INFO: Waiting for pod pod-99bb425b-4262-422c-ad47-778914a20d4d to disappear Feb 4 14:35:34.043: INFO: Pod pod-99bb425b-4262-422c-ad47-778914a20d4d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:35:34.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8539" for this suite. 
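The emptyDir test above asserts that a volume on the default medium is mounted world-writable (drwxrwxrwx). A stand-alone sketch with illustrative names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: checker
      image: busybox:1.36
      command: ["ls", "-ld", "/test-volume"]   # expect drwxrwxrwx
      volumeMounts:
      - name: scratch
        mountPath: /test-volume
    volumes:
    - name: scratch
      emptyDir: {}   # default medium is node disk; medium: Memory would use tmpfs
  EOF
  kubectl logs emptydir-mode-demo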
• [SLOW TEST:6.291 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":292,"skipped":4922,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:35:34.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Performing setup for networking test in namespace pod-network-test-2442 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 4 14:35:34.108: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Feb 4 14:35:34.196: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 4 14:35:36.252: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 4 14:35:38.200: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 4 14:35:40.201: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 4 14:35:42.202: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 4 14:35:44.201: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 4 14:35:46.201: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 4 14:35:48.201: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 4 14:35:50.201: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 4 14:35:52.202: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 4 14:35:54.200: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 4 14:35:56.202: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 4 14:35:58.202: INFO: The status of Pod netserver-0 is Running (Ready = true) Feb 4 14:35:58.207: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Feb 4 14:36:02.303: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 Feb 4 14:36:02.303: INFO: Going to poll 10.244.2.165 on port 8080 at least 0 times, with a maximum of 34 tries before failing Feb 4 14:36:02.305: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.165:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2442 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false Quiet:false} Feb 4 14:36:02.305: INFO: >>> kubeConfig: /root/.kube/config I0204 14:36:02.347760 7 log.go:181] (0xc0009d33f0) (0xc002ee1e00) Create stream I0204 14:36:02.347801 7 log.go:181] (0xc0009d33f0) (0xc002ee1e00) Stream added, broadcasting: 1 I0204 14:36:02.350218 7 log.go:181] (0xc0009d33f0) Reply frame received for 1 I0204 14:36:02.350252 7 log.go:181] (0xc0009d33f0) (0xc002ee1ea0) Create stream I0204 14:36:02.350263 7 log.go:181] (0xc0009d33f0) (0xc002ee1ea0) Stream added, broadcasting: 3 I0204 14:36:02.351199 7 log.go:181] (0xc0009d33f0) Reply frame received for 3 I0204 14:36:02.351232 7 log.go:181] (0xc0009d33f0) (0xc003ff7d60) Create stream I0204 14:36:02.351245 7 log.go:181] (0xc0009d33f0) (0xc003ff7d60) Stream added, broadcasting: 5 I0204 14:36:02.352138 7 log.go:181] (0xc0009d33f0) Reply frame received for 5 I0204 14:36:02.420369 7 log.go:181] (0xc0009d33f0) Data frame received for 3 I0204 14:36:02.420396 7 log.go:181] (0xc002ee1ea0) (3) Data frame handling I0204 14:36:02.420412 7 log.go:181] (0xc002ee1ea0) (3) Data frame sent I0204 14:36:02.420531 7 log.go:181] (0xc0009d33f0) Data frame received for 3 I0204 14:36:02.420559 7 log.go:181] (0xc002ee1ea0) (3) Data frame handling I0204 14:36:02.420591 7 log.go:181] (0xc0009d33f0) Data frame received for 5 I0204 14:36:02.420614 7 log.go:181] (0xc003ff7d60) (5) Data frame handling I0204 14:36:02.422864 7 log.go:181] (0xc0009d33f0) Data frame received for 1 I0204 14:36:02.422888 7 log.go:181] (0xc002ee1e00) (1) Data frame handling I0204 14:36:02.422908 7 log.go:181] (0xc002ee1e00) (1) Data frame sent I0204 14:36:02.422941 7 log.go:181] (0xc0009d33f0) (0xc002ee1e00) Stream removed, broadcasting: 1 I0204 14:36:02.422962 7 log.go:181] (0xc0009d33f0) Go away received I0204 14:36:02.423066 7 log.go:181] (0xc0009d33f0) (0xc002ee1e00) Stream removed, broadcasting: 1 I0204 14:36:02.423086 7 log.go:181] (0xc0009d33f0) (0xc002ee1ea0) Stream removed, broadcasting: 3 I0204 14:36:02.423095 7 log.go:181] (0xc0009d33f0) (0xc003ff7d60) Stream removed, broadcasting: 5 Feb 4 14:36:02.423: INFO: Found all 1 expected endpoints: [netserver-0] Feb 4 14:36:02.423: INFO: Going to poll 10.244.1.190 on port 8080 at least 0 times, with a maximum of 34 tries before failing Feb 4 14:36:02.426: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.190:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2442 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Feb 4 14:36:02.426: INFO: >>> kubeConfig: /root/.kube/config I0204 14:36:02.484007 7 log.go:181] (0xc0009d3ad0) (0xc0031ab0e0) Create stream I0204 14:36:02.484042 7 log.go:181] (0xc0009d3ad0) (0xc0031ab0e0) Stream added, broadcasting: 1 I0204 14:36:02.486168 7 log.go:181] (0xc0009d3ad0) Reply frame received for 1 I0204 14:36:02.486197 7 log.go:181] (0xc0009d3ad0) (0xc0031ab180) Create stream I0204 14:36:02.486206 7 log.go:181] (0xc0009d3ad0) (0xc0031ab180) Stream added, broadcasting: 3 I0204 14:36:02.487070 7 log.go:181] (0xc0009d3ad0) Reply frame received for 3 I0204 14:36:02.487108 7 log.go:181] (0xc0009d3ad0) (0xc003b2c780) Create stream I0204 14:36:02.487123 7 log.go:181] (0xc0009d3ad0) (0xc003b2c780) Stream added, broadcasting: 5 I0204 14:36:02.488094 7 log.go:181] (0xc0009d3ad0) Reply frame received for 5 I0204 14:36:02.554141 7 log.go:181] (0xc0009d3ad0) Data frame received for 3 I0204 14:36:02.554166 7 log.go:181] 
(0xc0031ab180) (3) Data frame handling I0204 14:36:02.554182 7 log.go:181] (0xc0031ab180) (3) Data frame sent I0204 14:36:02.554189 7 log.go:181] (0xc0009d3ad0) Data frame received for 3 I0204 14:36:02.554195 7 log.go:181] (0xc0031ab180) (3) Data frame handling I0204 14:36:02.554302 7 log.go:181] (0xc0009d3ad0) Data frame received for 5 I0204 14:36:02.554333 7 log.go:181] (0xc003b2c780) (5) Data frame handling I0204 14:36:02.555992 7 log.go:181] (0xc0009d3ad0) Data frame received for 1 I0204 14:36:02.556015 7 log.go:181] (0xc0031ab0e0) (1) Data frame handling I0204 14:36:02.556031 7 log.go:181] (0xc0031ab0e0) (1) Data frame sent I0204 14:36:02.556045 7 log.go:181] (0xc0009d3ad0) (0xc0031ab0e0) Stream removed, broadcasting: 1 I0204 14:36:02.556092 7 log.go:181] (0xc0009d3ad0) Go away received I0204 14:36:02.556136 7 log.go:181] (0xc0009d3ad0) (0xc0031ab0e0) Stream removed, broadcasting: 1 I0204 14:36:02.556161 7 log.go:181] (0xc0009d3ad0) (0xc0031ab180) Stream removed, broadcasting: 3 I0204 14:36:02.556183 7 log.go:181] (0xc0009d3ad0) (0xc003b2c780) Stream removed, broadcasting: 5 Feb 4 14:36:02.556: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:36:02.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2442" for this suite. • [SLOW TEST:28.512 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:27 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:30 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":293,"skipped":4944,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:36:02.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating configMap with name configmap-test-volume-ad1ef2c2-0ecd-4821-92a3-4d9e04a9515c STEP: Creating a pod to test consume configMaps Feb 4 14:36:02.680: INFO: Waiting up to 5m0s for pod "pod-configmaps-1d64087a-ae9d-4272-b200-1f59e30cc8b8" in namespace "configmap-5145" to be "Succeeded or Failed" Feb 4 14:36:02.711: INFO: Pod "pod-configmaps-1d64087a-ae9d-4272-b200-1f59e30cc8b8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 31.331973ms Feb 4 14:36:04.787: INFO: Pod "pod-configmaps-1d64087a-ae9d-4272-b200-1f59e30cc8b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107642827s Feb 4 14:36:06.793: INFO: Pod "pod-configmaps-1d64087a-ae9d-4272-b200-1f59e30cc8b8": Phase="Running", Reason="", readiness=true. Elapsed: 4.112935552s Feb 4 14:36:09.057: INFO: Pod "pod-configmaps-1d64087a-ae9d-4272-b200-1f59e30cc8b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.377358545s STEP: Saw pod success Feb 4 14:36:09.057: INFO: Pod "pod-configmaps-1d64087a-ae9d-4272-b200-1f59e30cc8b8" satisfied condition "Succeeded or Failed" Feb 4 14:36:09.060: INFO: Trying to get logs from node latest-worker pod pod-configmaps-1d64087a-ae9d-4272-b200-1f59e30cc8b8 container agnhost-container: STEP: delete the pod Feb 4 14:36:09.269: INFO: Waiting for pod pod-configmaps-1d64087a-ae9d-4272-b200-1f59e30cc8b8 to disappear Feb 4 14:36:09.331: INFO: Pod pod-configmaps-1d64087a-ae9d-4272-b200-1f59e30cc8b8 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:36:09.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5145" for this suite. • [SLOW TEST:6.985 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":294,"skipped":4957,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:36:09.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test emptydir 0777 on tmpfs Feb 4 14:36:09.837: INFO: Waiting up to 5m0s for pod "pod-ef11ee34-7432-4e44-9aa1-befe91de4b53" in namespace "emptydir-9162" to be "Succeeded or Failed" Feb 4 14:36:10.045: INFO: Pod "pod-ef11ee34-7432-4e44-9aa1-befe91de4b53": Phase="Pending", Reason="", readiness=false. Elapsed: 208.312074ms Feb 4 14:36:12.049: INFO: Pod "pod-ef11ee34-7432-4e44-9aa1-befe91de4b53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212278513s Feb 4 14:36:14.098: INFO: Pod "pod-ef11ee34-7432-4e44-9aa1-befe91de4b53": Phase="Running", Reason="", readiness=true. Elapsed: 4.261700114s Feb 4 14:36:16.103: INFO: Pod "pod-ef11ee34-7432-4e44-9aa1-befe91de4b53": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.266082973s STEP: Saw pod success Feb 4 14:36:16.103: INFO: Pod "pod-ef11ee34-7432-4e44-9aa1-befe91de4b53" satisfied condition "Succeeded or Failed" Feb 4 14:36:16.106: INFO: Trying to get logs from node latest-worker pod pod-ef11ee34-7432-4e44-9aa1-befe91de4b53 container test-container: STEP: delete the pod Feb 4 14:36:16.159: INFO: Waiting for pod pod-ef11ee34-7432-4e44-9aa1-befe91de4b53 to disappear Feb 4 14:36:16.175: INFO: Pod pod-ef11ee34-7432-4e44-9aa1-befe91de4b53 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:36:16.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9162" for this suite. • [SLOW TEST:6.632 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:45 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":295,"skipped":4962,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:36:16.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Create set of pods Feb 4 14:36:16.314: INFO: created test-pod-1 Feb 4 14:36:16.318: INFO: created test-pod-2 Feb 4 14:36:16.325: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:36:16.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-185" for this suite. 
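The collection delete above removes all three pods with a single DELETE against the pods collection rather than three per-object calls. With kubectl, a label selector drives the same deletecollection verb (label and pod names are illustrative):

  kubectl run test-pod-1 --image=nginx:1.25 --labels=batch=demo
  kubectl run test-pod-2 --image=nginx:1.25 --labels=batch=demo
  kubectl run test-pod-3 --image=nginx:1.25 --labels=batch=demo
  kubectl delete pods -l batch=demo --wait=false   # one deletecollection request
  kubectl get pods -l batch=demo                   # poll until the list is empty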
•{"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":311,"completed":296,"skipped":4976,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:36:16.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward API volume plugin Feb 4 14:36:16.620: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f29b0fd6-3949-47cf-9e04-abfd228c3e37" in namespace "downward-api-723" to be "Succeeded or Failed" Feb 4 14:36:16.679: INFO: Pod "downwardapi-volume-f29b0fd6-3949-47cf-9e04-abfd228c3e37": Phase="Pending", Reason="", readiness=false. Elapsed: 58.841538ms Feb 4 14:36:18.682: INFO: Pod "downwardapi-volume-f29b0fd6-3949-47cf-9e04-abfd228c3e37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062649874s Feb 4 14:36:20.687: INFO: Pod "downwardapi-volume-f29b0fd6-3949-47cf-9e04-abfd228c3e37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066960416s STEP: Saw pod success Feb 4 14:36:20.687: INFO: Pod "downwardapi-volume-f29b0fd6-3949-47cf-9e04-abfd228c3e37" satisfied condition "Succeeded or Failed" Feb 4 14:36:20.690: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-f29b0fd6-3949-47cf-9e04-abfd228c3e37 container client-container: STEP: delete the pod Feb 4 14:36:20.728: INFO: Waiting for pod downwardapi-volume-f29b0fd6-3949-47cf-9e04-abfd228c3e37 to disappear Feb 4 14:36:20.762: INFO: Pod downwardapi-volume-f29b0fd6-3949-47cf-9e04-abfd228c3e37 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:36:20.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-723" for this suite. 
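The downward API test mounts limits.memory into a volume without setting a memory limit on the container, so the kubelet falls back to the node's allocatable memory as the default. A minimal sketch (names and image are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-limit-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox:1.36
      command: ["cat", "/etc/podinfo/memory_limit"]   # prints node allocatable, in bytes
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: memory_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.memory   # unset on the container, so node allocatable is used
  EOF
  kubectl logs downward-limit-demo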
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":311,"completed":297,"skipped":4993,"failed":0} ------------------------------ [k8s.io] Pods should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:36:20.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a Pod with a static label STEP: watching for Pod to be ready Feb 4 14:36:20.933: INFO: observed Pod pod-test in namespace pods-660 in phase Pending with labels: map[test-pod-static:true] & conditions [] Feb 4 14:36:20.953: INFO: observed Pod pod-test in namespace pods-660 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 14:36:20 +0000 UTC }] Feb 4 14:36:20.990: INFO: observed Pod pod-test in namespace pods-660 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 14:36:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 14:36:20 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-04 14:36:20 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 14:36:20 +0000 UTC }] Feb 4 14:36:24.160: INFO: Found Pod pod-test in namespace pods-660 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 14:36:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 14:36:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 14:36:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-04 14:36:20 +0000 UTC }] STEP: patching the Pod with a new Label and updated data Feb 4 14:36:24.167: INFO: observed event type ADDED STEP: getting the Pod and ensuring that it's patched STEP: getting the PodStatus STEP: replacing the Pod's status Ready condition to False STEP: check the Pod again to ensure its Ready conditions are False STEP: deleting the Pod via a Collection with a LabelSelector STEP: watching for the Pod to be deleted Feb 4 14:36:24.240: INFO: observed event type ADDED Feb 4 14:36:24.240: INFO: observed event type MODIFIED Feb 4 14:36:24.240: INFO: observed event type MODIFIED Feb 4 14:36:24.240: INFO: observed event type MODIFIED Feb 4 14:36:24.240: INFO: observed event type MODIFIED Feb 4 14:36:24.240: INFO: observed event type MODIFIED Feb 4 14:36:24.241: INFO: observed event type MODIFIED [AfterEach] [k8s.io] Pods 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:36:24.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-660" for this suite. •{"msg":"PASSED [k8s.io] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":311,"completed":298,"skipped":4993,"failed":0} ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:36:24.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 14:36:24.772: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-bf8399cc-22be-4523-87be-88027dcd31e9" in namespace "security-context-test-160" to be "Succeeded or Failed" Feb 4 14:36:24.821: INFO: Pod "busybox-privileged-false-bf8399cc-22be-4523-87be-88027dcd31e9": Phase="Pending", Reason="", readiness=false. Elapsed: 49.185916ms Feb 4 14:36:26.824: INFO: Pod "busybox-privileged-false-bf8399cc-22be-4523-87be-88027dcd31e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052154029s Feb 4 14:36:28.828: INFO: Pod "busybox-privileged-false-bf8399cc-22be-4523-87be-88027dcd31e9": Phase="Running", Reason="", readiness=true. Elapsed: 4.056231152s Feb 4 14:36:30.832: INFO: Pod "busybox-privileged-false-bf8399cc-22be-4523-87be-88027dcd31e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.060046623s Feb 4 14:36:30.832: INFO: Pod "busybox-privileged-false-bf8399cc-22be-4523-87be-88027dcd31e9" satisfied condition "Succeeded or Failed" Feb 4 14:36:30.838: INFO: Got logs for pod "busybox-privileged-false-bf8399cc-22be-4523-87be-88027dcd31e9": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:36:30.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-160" for this suite. 
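The "ip: RTNETLINK answers: Operation not permitted" line captured from the container logs is the expected outcome: with privileged: false the container lacks CAP_NET_ADMIN, so busybox's ip command cannot modify network interfaces. A minimal sketch of this kind of pod (name and command are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-privileged-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox-privileged-false-demo
    image: busybox:1.29
    # "|| true" lets the pod reach Succeeded even though the ip command fails
    command: ["sh", "-c", "ip link add dummy0 type dummy || true"]
    securityContext:
      privileged: false   # the default; set explicitly to mirror the spec
EOF
kubectl logs busybox-privileged-false-demo   # expect: ip: RTNETLINK answers: Operation not permitted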
• [SLOW TEST:6.523 seconds] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 When creating a pod with privileged /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":311,"completed":299,"skipped":4993,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:36:30.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating secret with name secret-test-39bf2fce-8586-4ae0-9bf3-91bd4a2016b6 STEP: Creating a pod to test consume secrets Feb 4 14:36:30.974: INFO: Waiting up to 5m0s for pod "pod-secrets-0c896c92-d42a-4fd7-93a7-425b58e3c004" in namespace "secrets-9748" to be "Succeeded or Failed" Feb 4 14:36:30.977: INFO: Pod "pod-secrets-0c896c92-d42a-4fd7-93a7-425b58e3c004": Phase="Pending", Reason="", readiness=false. Elapsed: 3.725262ms Feb 4 14:36:32.983: INFO: Pod "pod-secrets-0c896c92-d42a-4fd7-93a7-425b58e3c004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008873594s Feb 4 14:36:34.996: INFO: Pod "pod-secrets-0c896c92-d42a-4fd7-93a7-425b58e3c004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022436508s STEP: Saw pod success Feb 4 14:36:34.996: INFO: Pod "pod-secrets-0c896c92-d42a-4fd7-93a7-425b58e3c004" satisfied condition "Succeeded or Failed" Feb 4 14:36:34.999: INFO: Trying to get logs from node latest-worker pod pod-secrets-0c896c92-d42a-4fd7-93a7-425b58e3c004 container secret-volume-test: STEP: delete the pod Feb 4 14:36:35.037: INFO: Waiting for pod pod-secrets-0c896c92-d42a-4fd7-93a7-425b58e3c004 to disappear Feb 4 14:36:35.062: INFO: Pod pod-secrets-0c896c92-d42a-4fd7-93a7-425b58e3c004 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:36:35.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9748" for this suite. 
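The multi-volume Secrets spec mounts one secret at two different paths by declaring two volumes that reference the same secretName. A hedged reproduction (secret name, keys, and paths are illustrative):

kubectl create secret generic multi-volume-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    # both mounts expose the same secret key as a file
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: multi-volume-secret
  - name: secret-volume-2
    secret:
      secretName: multi-volume-secret
EOF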
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":311,"completed":300,"skipped":5022,"failed":0} SSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:36:35.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:36:35.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-802" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":311,"completed":301,"skipped":5026,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:36:35.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:36:35.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4862" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":311,"completed":302,"skipped":5043,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:36:35.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5392.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5392.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5392.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5392.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 4 14:36:41.858: INFO: DNS probes using dns-test-272a92a6-f511-48a4-9a05-914f13f2def9 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5392.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5392.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5392.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5392.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 4 14:36:50.017: INFO: File wheezy_udp@dns-test-service-3.dns-5392.svc.cluster.local from pod dns-5392/dns-test-2334d7f7-5f59-498a-9dbe-3f200a6621dd contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 4 14:36:50.021: INFO: File jessie_udp@dns-test-service-3.dns-5392.svc.cluster.local from pod dns-5392/dns-test-2334d7f7-5f59-498a-9dbe-3f200a6621dd contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 4 14:36:50.021: INFO: Lookups using dns-5392/dns-test-2334d7f7-5f59-498a-9dbe-3f200a6621dd failed for: [wheezy_udp@dns-test-service-3.dns-5392.svc.cluster.local jessie_udp@dns-test-service-3.dns-5392.svc.cluster.local] Feb 4 14:36:55.026: INFO: File wheezy_udp@dns-test-service-3.dns-5392.svc.cluster.local from pod dns-5392/dns-test-2334d7f7-5f59-498a-9dbe-3f200a6621dd contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 4 14:36:55.030: INFO: File jessie_udp@dns-test-service-3.dns-5392.svc.cluster.local from pod dns-5392/dns-test-2334d7f7-5f59-498a-9dbe-3f200a6621dd contains 'foo.example.com. ' instead of 'bar.example.com.' 
Feb 4 14:36:55.030: INFO: Lookups using dns-5392/dns-test-2334d7f7-5f59-498a-9dbe-3f200a6621dd failed for: [wheezy_udp@dns-test-service-3.dns-5392.svc.cluster.local jessie_udp@dns-test-service-3.dns-5392.svc.cluster.local] Feb 4 14:37:00.027: INFO: File wheezy_udp@dns-test-service-3.dns-5392.svc.cluster.local from pod dns-5392/dns-test-2334d7f7-5f59-498a-9dbe-3f200a6621dd contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 4 14:37:00.030: INFO: File jessie_udp@dns-test-service-3.dns-5392.svc.cluster.local from pod dns-5392/dns-test-2334d7f7-5f59-498a-9dbe-3f200a6621dd contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 4 14:37:00.030: INFO: Lookups using dns-5392/dns-test-2334d7f7-5f59-498a-9dbe-3f200a6621dd failed for: [wheezy_udp@dns-test-service-3.dns-5392.svc.cluster.local jessie_udp@dns-test-service-3.dns-5392.svc.cluster.local] Feb 4 14:37:05.027: INFO: File wheezy_udp@dns-test-service-3.dns-5392.svc.cluster.local from pod dns-5392/dns-test-2334d7f7-5f59-498a-9dbe-3f200a6621dd contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 4 14:37:05.031: INFO: File jessie_udp@dns-test-service-3.dns-5392.svc.cluster.local from pod dns-5392/dns-test-2334d7f7-5f59-498a-9dbe-3f200a6621dd contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 4 14:37:05.031: INFO: Lookups using dns-5392/dns-test-2334d7f7-5f59-498a-9dbe-3f200a6621dd failed for: [wheezy_udp@dns-test-service-3.dns-5392.svc.cluster.local jessie_udp@dns-test-service-3.dns-5392.svc.cluster.local] Feb 4 14:37:10.026: INFO: File wheezy_udp@dns-test-service-3.dns-5392.svc.cluster.local from pod dns-5392/dns-test-2334d7f7-5f59-498a-9dbe-3f200a6621dd contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 4 14:37:10.030: INFO: Lookups using dns-5392/dns-test-2334d7f7-5f59-498a-9dbe-3f200a6621dd failed for: [wheezy_udp@dns-test-service-3.dns-5392.svc.cluster.local] Feb 4 14:37:15.029: INFO: DNS probes using dns-test-2334d7f7-5f59-498a-9dbe-3f200a6621dd succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5392.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5392.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5392.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5392.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 4 14:37:23.936: INFO: DNS probes using dns-test-72e72997-2860-481d-9e9c-353e5da7727d succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:37:24.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5392" for this suite. 
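The repeated "contains 'foo.example.com.' instead of 'bar.example.com.'" lines above are the probe waiting out stale DNS answers after spec.externalName changed; an ExternalName service publishes a CNAME, and the final phase (type=ClusterIP) replaces that CNAME with an A record. A hedged sketch of the same flow (service name, namespace "default", and the busybox probe image are assumptions, not the test's jessie/wheezy probers):

kubectl create service externalname dns-test-service --external-name foo.example.com
kubectl run dns-probe --image=busybox:1.28 --restart=Never -it --rm -- \
  nslookup dns-test-service.default.svc.cluster.local   # CNAME to foo.example.com
kubectl patch service dns-test-service -p '{"spec":{"externalName":"bar.example.com"}}'
# repeat the lookup; expect foo.example.com answers until cached records expire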
• [SLOW TEST:48.930 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":311,"completed":303,"skipped":5049,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:37:24.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Feb 4 14:37:24.740: INFO: Pod name pod-release: Found 0 pods out of 1 Feb 4 14:37:29.743: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:37:30.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5123" for this suite. 
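"Then the pod is released" above means the ReplicationController removes its controller ownerReference from a pod whose labels no longer match the selector, then creates a replacement. A sketch of the same sequence (object names and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.2
EOF
POD=$(kubectl get pods -l name=pod-release -o jsonpath='{.items[0].metadata.name}')
kubectl label pod "$POD" name=released --overwrite         # selector no longer matches
kubectl get pod "$POD" -o jsonpath='{.metadata.ownerReferences}'  # controller reference removed
kubectl get pods -l name=pod-release                       # RC has spawned a replacement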
• [SLOW TEST:6.205 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":311,"completed":304,"skipped":5106,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:37:30.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Creating a pod to test downward api env vars Feb 4 14:37:31.138: INFO: Waiting up to 5m0s for pod "downward-api-04dc8812-b4dc-4bee-853c-c1db6f255c29" in namespace "downward-api-6734" to be "Succeeded or Failed" Feb 4 14:37:31.149: INFO: Pod "downward-api-04dc8812-b4dc-4bee-853c-c1db6f255c29": Phase="Pending", Reason="", readiness=false. Elapsed: 11.184996ms Feb 4 14:37:33.201: INFO: Pod "downward-api-04dc8812-b4dc-4bee-853c-c1db6f255c29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063073304s Feb 4 14:37:35.207: INFO: Pod "downward-api-04dc8812-b4dc-4bee-853c-c1db6f255c29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069002678s STEP: Saw pod success Feb 4 14:37:35.207: INFO: Pod "downward-api-04dc8812-b4dc-4bee-853c-c1db6f255c29" satisfied condition "Succeeded or Failed" Feb 4 14:37:35.210: INFO: Trying to get logs from node latest-worker pod downward-api-04dc8812-b4dc-4bee-853c-c1db6f255c29 container dapi-container: STEP: delete the pod Feb 4 14:37:35.526: INFO: Waiting for pod downward-api-04dc8812-b4dc-4bee-853c-c1db6f255c29 to disappear Feb 4 14:37:35.531: INFO: Pod downward-api-04dc8812-b4dc-4bee-853c-c1db6f255c29 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:37:35.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6734" for this suite. 
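The downward API env-var spec injects the pod's own UID through a fieldRef. A minimal illustrative pod (name and image are not the test's):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-uid-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid   # resolved by the kubelet at container start
EOF
kubectl logs downward-uid-demo   # prints the pod's UID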
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":311,"completed":305,"skipped":5152,"failed":0} SSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:37:35.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 14:37:39.654: INFO: Deleting pod "var-expansion-62687649-9a02-434f-bc31-00359e70c2c6" in namespace "var-expansion-2660" Feb 4 14:37:39.660: INFO: Wait up to 5m0s for pod "var-expansion-62687649-9a02-434f-bc31-00359e70c2c6" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:38:01.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2660" for this suite. • [SLOW TEST:26.172 seconds] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:635 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":311,"completed":306,"skipped":5159,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:38:01.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 14:38:01.886: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 
14:38:02.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8209" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":311,"completed":307,"skipped":5205,"failed":0} S ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:38:02.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: Starting the proxy Feb 4 14:38:02.626: INFO: Asynchronously running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-1958 proxy --unix-socket=/tmp/kubectl-proxy-unix501906309/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:38:02.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1958" for this suite.
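With --unix-socket, kubectl proxy listens on a Unix domain socket instead of a TCP port, and any HTTP client that can speak over a socket can query the API through it. A sketch (socket path is illustrative; curl's --unix-socket flag assumed available, curl 7.40+):

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/   # returns the APIVersions document
kill %1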
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":311,"completed":308,"skipped":5206,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:38:02.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: running the image docker.io/library/httpd:2.4.38-alpine Feb 4 14:38:02.793: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2065 run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod' Feb 4 14:38:05.973: INFO: stderr: "" Feb 4 14:38:05.973: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Feb 4 14:38:05.974: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2065 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "docker.io/library/busybox:1.29"}]}} --dry-run=server' Feb 4 14:38:06.406: INFO: stderr: "" Feb 4 14:38:06.407: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine Feb 4 14:38:06.410: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=kubectl-2065 delete pods e2e-test-httpd-pod' Feb 4 14:39:01.229: INFO: stderr: "" Feb 4 14:39:01.229: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:39:01.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2065" for this suite. 
• [SLOW TEST:58.602 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:909 should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":311,"completed":309,"skipped":5210,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:39:01.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 Feb 4 14:39:01.370: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Feb 4 14:39:04.394: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2707 --namespace=crd-publish-openapi-2707 create -f -' Feb 4 14:39:08.116: INFO: stderr: "" Feb 4 14:39:08.116: INFO: stdout: "e2e-test-crd-publish-openapi-9331-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Feb 4 14:39:08.116: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2707 --namespace=crd-publish-openapi-2707 delete e2e-test-crd-publish-openapi-9331-crds test-foo' Feb 4 14:39:08.222: INFO: stderr: "" Feb 4 14:39:08.222: INFO: stdout: "e2e-test-crd-publish-openapi-9331-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Feb 4 14:39:08.223: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2707 --namespace=crd-publish-openapi-2707 apply -f -' Feb 4 14:39:08.557: INFO: stderr: "" Feb 4 14:39:08.557: INFO: stdout: "e2e-test-crd-publish-openapi-9331-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Feb 4 14:39:08.557: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2707 --namespace=crd-publish-openapi-2707 delete e2e-test-crd-publish-openapi-9331-crds test-foo' Feb 4 14:39:08.672: INFO: stderr: "" Feb 4 14:39:08.672: INFO: stdout: "e2e-test-crd-publish-openapi-9331-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Feb 4 14:39:08.672: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2707 --namespace=crd-publish-openapi-2707 create -f -' Feb 4 14:39:08.988: INFO: rc: 1 Feb 4 14:39:08.988: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2707 --namespace=crd-publish-openapi-2707 apply -f -' Feb 4 14:39:09.266: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Feb 4 14:39:09.267: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2707 --namespace=crd-publish-openapi-2707 create -f -' Feb 4 14:39:09.566: INFO: rc: 1 Feb 4 14:39:09.566: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2707 --namespace=crd-publish-openapi-2707 apply -f -' Feb 4 14:39:09.900: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Feb 4 14:39:09.900: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2707 explain e2e-test-crd-publish-openapi-9331-crds' Feb 4 14:39:10.194: INFO: stderr: "" Feb 4 14:39:10.194: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9331-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Feb 4 14:39:10.195: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2707 explain e2e-test-crd-publish-openapi-9331-crds.metadata' Feb 4 14:39:10.498: INFO: stderr: "" Feb 4 14:39:10.498: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9331-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. 
This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. 
This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. 
Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Feb 4 14:39:10.499: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2707 explain e2e-test-crd-publish-openapi-9331-crds.spec' Feb 4 14:39:10.805: INFO: stderr: "" Feb 4 14:39:10.805: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9331-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Feb 4 14:39:10.805: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2707 explain e2e-test-crd-publish-openapi-9331-crds.spec.bars' Feb 4 14:39:11.119: INFO: stderr: "" Feb 4 14:39:11.119: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9331-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Feb 4 14:39:11.120: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:36371 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2707 explain e2e-test-crd-publish-openapi-9331-crds.spec.bars2' Feb 4 14:39:11.404: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:39:14.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2707" for this suite. 
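The CRD name above (e2e-test-crd-publish-openapi-9331-crds) is generated fresh each run; what the spec demonstrates is that a CRD's validation schema is published into the cluster's OpenAPI document, so kubectl explain works on custom resources exactly as it does on built-ins. Illustrative usage (the CRD plural below is a placeholder to substitute):

kubectl explain pods.spec.containers            # built-in type, same publishing pipeline
kubectl explain <crd-plural>.spec               # placeholder: your CRD's plural name
kubectl explain <crd-plural>.spec --recursive   # walk the full validation schema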
• [SLOW TEST:13.661 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":311,"completed":310,"skipped":5239,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Feb 4 14:39:14.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Feb 4 14:39:15.073: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 4 14:39:15.080: INFO: Waiting for terminating namespaces to be deleted... Feb 4 14:39:15.083: INFO: Logging pods the apiserver thinks is on node latest-worker before test Feb 4 14:39:15.088: INFO: chaos-controller-manager-69c479c674-tdrls from default started at 2021-01-26 11:59:56 +0000 UTC (1 container statuses recorded) Feb 4 14:39:15.088: INFO: Container chaos-mesh ready: true, restart count 0 Feb 4 14:39:15.088: INFO: chaos-daemon-vkxzr from default started at 2021-01-26 11:59:56 +0000 UTC (1 container statuses recorded) Feb 4 14:39:15.088: INFO: Container chaos-daemon ready: true, restart count 0 Feb 4 14:39:15.088: INFO: coredns-74ff55c5b-zzl9d from kube-system started at 2021-02-04 13:09:59 +0000 UTC (1 container statuses recorded) Feb 4 14:39:15.088: INFO: Container coredns ready: true, restart count 0 Feb 4 14:39:15.088: INFO: kindnet-5bf5g from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container statuses recorded) Feb 4 14:39:15.088: INFO: Container kindnet-cni ready: true, restart count 0 Feb 4 14:39:15.088: INFO: kube-proxy-f59c8 from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container statuses recorded) Feb 4 14:39:15.088: INFO: Container kube-proxy ready: true, restart count 0 Feb 4 14:39:15.088: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Feb 4 14:39:15.093: INFO: chaos-daemon-g67vf from default started at 2021-01-26 11:59:56 +0000 UTC (1 container statuses recorded) Feb 4 14:39:15.093: INFO: Container chaos-daemon ready: true, restart count 0 Feb 4 14:39:15.093: INFO: coredns-74ff55c5b-674bk from kube-system started at 2021-02-04 13:09:59 +0000 UTC (1 container statuses recorded) Feb 4 14:39:15.093: INFO: Container coredns ready: true, restart count 0 Feb 4 14:39:15.093: INFO: kindnet-98jtw from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container statuses recorded) Feb 4 14:39:15.093: INFO: Container kindnet-cni ready: true, restart count 0 Feb 4 14:39:15.093: INFO: 
kube-proxy-skm7x from kube-system started at 2021-01-26 08:08:41 +0000 UTC (1 container statuses recorded) Feb 4 14:39:15.093: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Feb 4 14:39:15.209: INFO: Pod chaos-controller-manager-69c479c674-tdrls requesting resource cpu=25m on Node latest-worker Feb 4 14:39:15.209: INFO: Pod chaos-daemon-g67vf requesting resource cpu=0m on Node latest-worker2 Feb 4 14:39:15.209: INFO: Pod chaos-daemon-vkxzr requesting resource cpu=0m on Node latest-worker Feb 4 14:39:15.209: INFO: Pod coredns-74ff55c5b-674bk requesting resource cpu=100m on Node latest-worker2 Feb 4 14:39:15.209: INFO: Pod coredns-74ff55c5b-zzl9d requesting resource cpu=100m on Node latest-worker Feb 4 14:39:15.209: INFO: Pod kindnet-5bf5g requesting resource cpu=100m on Node latest-worker Feb 4 14:39:15.209: INFO: Pod kindnet-98jtw requesting resource cpu=100m on Node latest-worker2 Feb 4 14:39:15.209: INFO: Pod kube-proxy-f59c8 requesting resource cpu=0m on Node latest-worker Feb 4 14:39:15.209: INFO: Pod kube-proxy-skm7x requesting resource cpu=0m on Node latest-worker2 STEP: Starting Pods to consume most of the cluster CPU. Feb 4 14:39:15.209: INFO: Creating a pod which consumes cpu=11042m on Node latest-worker Feb 4 14:39:15.217: INFO: Creating a pod which consumes cpu=11060m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-6d2a1ece-e02c-43cc-b86a-84577c691357.16609253cdb8d445], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4946/filler-pod-6d2a1ece-e02c-43cc-b86a-84577c691357 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-6d2a1ece-e02c-43cc-b86a-84577c691357.166092542ce40f3f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-6d2a1ece-e02c-43cc-b86a-84577c691357.16609254843433fa], Reason = [Created], Message = [Created container filler-pod-6d2a1ece-e02c-43cc-b86a-84577c691357] STEP: Considering event: Type = [Normal], Name = [filler-pod-6d2a1ece-e02c-43cc-b86a-84577c691357.16609254963302c3], Reason = [Started], Message = [Started container filler-pod-6d2a1ece-e02c-43cc-b86a-84577c691357] STEP: Considering event: Type = [Normal], Name = [filler-pod-cc68e0e4-0b87-4894-8f22-b5797596f203.16609253ceef744d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4946/filler-pod-cc68e0e4-0b87-4894-8f22-b5797596f203 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-cc68e0e4-0b87-4894-8f22-b5797596f203.166092543da53e86], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-cc68e0e4-0b87-4894-8f22-b5797596f203.166092549a7f5216], Reason = [Created], Message = [Created container filler-pod-cc68e0e4-0b87-4894-8f22-b5797596f203] STEP: Considering event: Type = [Normal], Name = [filler-pod-cc68e0e4-0b87-4894-8f22-b5797596f203.16609254ab34d669], Reason = [Started], Message = [Started container filler-pod-cc68e0e4-0b87-4894-8f22-b5797596f203] STEP: Considering event: Type = [Warning], Name = 
[additional-pod.166092553609e1e9], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Feb 4 14:39:22.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4946" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:7.406 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:640 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":311,"completed":311,"skipped":5268,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSFeb 4 14:39:22.373: INFO: Running AfterSuite actions on all nodes Feb 4 14:39:22.374: INFO: Running AfterSuite actions on node 1 Feb 4 14:39:22.374: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":311,"completed":311,"skipped":5329,"failed":0} Ran 311 of 5640 Specs in 8977.044 seconds SUCCESS! -- 311 Passed | 0 Failed | 0 Pending | 5329 Skipped PASS
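For readers reproducing the final SchedulerPredicates spec outside the harness: the assertion target is the FailedScheduling event, produced by filling each node to near its allocatable CPU and then submitting one more pod that cannot fit. A simplified sketch (the test computes exact filler requests per node; here a single oversized request stands in for that, and the cpu value is deliberately unsatisfiable):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod-demo
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
    resources:
      requests:
        cpu: "600"   # assumption: far larger than any node's allocatable CPU
EOF
kubectl describe pod additional-pod-demo   # Events: FailedScheduling ... Insufficient cpu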