I0824 23:20:15.862736 7 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0824 23:20:15.862957 7 e2e.go:129] Starting e2e run "3c9f0768-24c9-4b25-8296-6ddebf06d887" on Ginkgo node 1
{"msg":"Test Suite starting","total":303,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1598311214 - Will randomize all specs
Will run 303 of 5237 specs

Aug 24 23:20:15.916: INFO: >>> kubeConfig: /root/.kube/config
Aug 24 23:20:15.920: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 24 23:20:15.939: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 24 23:20:15.974: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 24 23:20:15.974: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 24 23:20:15.974: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 24 23:20:15.980: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 24 23:20:15.980: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 24 23:20:15.980: INFO: e2e test version: v1.19.0-rc.4
Aug 24 23:20:15.981: INFO: kube-apiserver version: v1.19.0-rc.1
Aug 24 23:20:15.981: INFO: >>> kubeConfig: /root/.kube/config
Aug 24 23:20:16.001: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 24 23:20:16.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
Aug 24 23:20:16.178: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 24 23:20:17.223: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 24 23:20:19.235: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908017, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908017, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908017, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908017, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 24 23:20:21.251: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908017, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908017, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908017, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908017, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 24 23:20:24.350: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 24 23:20:24.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7047" for this suite.
STEP: Destroying namespace "webhook-7047-markers" for this suite.
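The "Registering the mutating configmap webhook via the AdmissionRegistration API" step above boils down to creating a MutatingWebhookConfiguration that points at the e2e-test-webhook service. The client-go sketch below is a reconstruction, not the framework's actual code: the service name comes from this log, while the webhook name, namespace, handler path, and port are assumed placeholders.

    // Sketch: register a mutating webhook for ConfigMap CREATEs.
    // Names, path, and port are assumptions; only "e2e-test-webhook" appears in the log.
    package main

    import (
    	"context"

    	admv1 "k8s.io/api/admissionregistration/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	failurePolicy := admv1.Fail
    	sideEffects := admv1.SideEffectClassNone
    	path := "/mutating-configmaps" // hypothetical handler path
    	port := int32(8443)            // hypothetical service port

    	hook := &admv1.MutatingWebhookConfiguration{
    		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-mutating-configmap"},
    		Webhooks: []admv1.MutatingWebhook{{
    			Name: "mutate-configmaps.example.com",
    			ClientConfig: admv1.WebhookClientConfig{
    				Service: &admv1.ServiceReference{
    					Namespace: "webhook-7047", // assumption based on the namespaces in this log
    					Name:      "e2e-test-webhook",
    					Path:      &path,
    					Port:      &port,
    				},
    				// The real test injects the cert built in "Setting up server cert" here.
    				CABundle: nil,
    			},
    			Rules: []admv1.RuleWithOperations{{
    				Operations: []admv1.OperationType{admv1.Create},
    				Rule: admv1.Rule{
    					APIGroups:   []string{""},
    					APIVersions: []string{"v1"},
    					Resources:   []string{"configmaps"},
    				},
    			}},
    			FailurePolicy:           &failurePolicy,
    			SideEffects:             &sideEffects,
    			AdmissionReviewVersions: []string{"v1", "v1beta1"},
    		}},
    	}
    	if _, err := client.AdmissionregistrationV1().MutatingWebhookConfigurations().
    		Create(context.TODO(), hook, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}
    }

Once such a configuration exists, every ConfigMap CREATE in scope is sent to the webhook pod deployed above, which is why the test can then "create a configmap that should be updated by the webhook" and assert on the mutation.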
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:8.671 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":303,"completed":1,"skipped":13,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 24 23:20:24.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-1b4bd834-bec3-4b05-b112-b7a3934ad31a
STEP: Creating a pod to test consume secrets
Aug 24 23:20:24.894: INFO: Waiting up to 5m0s for pod "pod-secrets-9dd777eb-5ffd-4ca7-b4fc-4214d4447477" in namespace "secrets-5727" to be "Succeeded or Failed"
Aug 24 23:20:24.898: INFO: Pod "pod-secrets-9dd777eb-5ffd-4ca7-b4fc-4214d4447477": Phase="Pending", Reason="", readiness=false. Elapsed: 4.355368ms
Aug 24 23:20:26.963: INFO: Pod "pod-secrets-9dd777eb-5ffd-4ca7-b4fc-4214d4447477": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069822735s
Aug 24 23:20:29.013: INFO: Pod "pod-secrets-9dd777eb-5ffd-4ca7-b4fc-4214d4447477": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.119476381s
STEP: Saw pod success
Aug 24 23:20:29.013: INFO: Pod "pod-secrets-9dd777eb-5ffd-4ca7-b4fc-4214d4447477" satisfied condition "Succeeded or Failed"
Aug 24 23:20:29.016: INFO: Trying to get logs from node latest-worker pod pod-secrets-9dd777eb-5ffd-4ca7-b4fc-4214d4447477 container secret-volume-test:
STEP: delete the pod
Aug 24 23:20:29.289: INFO: Waiting for pod pod-secrets-9dd777eb-5ffd-4ca7-b4fc-4214d4447477 to disappear
Aug 24 23:20:29.377: INFO: Pod pod-secrets-9dd777eb-5ffd-4ca7-b4fc-4214d4447477 no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 24 23:20:29.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5727" for this suite.
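The secrets test that just finished creates a secret, then a pod that mounts it as a volume and exits once it has read the files; the suite waits for phase Succeeded before deleting the pod. A minimal sketch of such a pod, assuming a busybox image and mount path (the real test uses its own test image and a second same-named secret in "secret-namespace-6049" to prove namespacing):

    // Sketch: pod that mounts a secret volume and exits after listing it.
    // Secret name, image, command, and mount path are assumptions.
    package main

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
    		Spec: corev1.PodSpec{
    			// RestartPolicy Never lets the pod reach "Succeeded", the condition polled above.
    			RestartPolicy: corev1.RestartPolicyNever,
    			Volumes: []corev1.Volume{{
    				Name: "secret-volume",
    				VolumeSource: corev1.VolumeSource{
    					Secret: &corev1.SecretVolumeSource{SecretName: "secret-test-example"},
    				},
    			}},
    			Containers: []corev1.Container{{
    				Name:    "secret-volume-test",
    				Image:   "docker.io/library/busybox:1.29",
    				Command: []string{"sh", "-c", "ls /etc/secret-volume"},
    				VolumeMounts: []corev1.VolumeMount{{
    					Name:      "secret-volume",
    					MountPath: "/etc/secret-volume",
    				}},
    			}},
    		},
    	}
    	if _, err := client.CoreV1().Pods("secrets-5727").
    		Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}
    }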
STEP: Destroying namespace "secret-namespace-6049" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":303,"completed":2,"skipped":28,"failed":0} SSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:20:29.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 24 23:20:29.549: INFO: Waiting up to 5m0s for pod "busybox-user-65534-8a80cc11-38fd-4acc-85ab-93b5ee20e459" in namespace "security-context-test-6426" to be "Succeeded or Failed" Aug 24 23:20:29.587: INFO: Pod "busybox-user-65534-8a80cc11-38fd-4acc-85ab-93b5ee20e459": Phase="Pending", Reason="", readiness=false. Elapsed: 38.233271ms Aug 24 23:20:31.606: INFO: Pod "busybox-user-65534-8a80cc11-38fd-4acc-85ab-93b5ee20e459": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056507843s Aug 24 23:20:33.683: INFO: Pod "busybox-user-65534-8a80cc11-38fd-4acc-85ab-93b5ee20e459": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134042525s Aug 24 23:20:35.834: INFO: Pod "busybox-user-65534-8a80cc11-38fd-4acc-85ab-93b5ee20e459": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.284492318s Aug 24 23:20:35.834: INFO: Pod "busybox-user-65534-8a80cc11-38fd-4acc-85ab-93b5ee20e459" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:20:35.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6426" for this suite. 
• [SLOW TEST:6.467 seconds]
[k8s.io] Security Context
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  When creating a container with runAsUser
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":3,"skipped":34,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 24 23:20:35.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Aug 24 23:20:36.033: INFO: Creating deployment "test-recreate-deployment"
Aug 24 23:20:36.126: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Aug 24 23:20:36.157: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Aug 24 23:20:38.217: INFO: Waiting deployment "test-recreate-deployment" to complete
Aug 24 23:20:38.220: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908036, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908036, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908036, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908036, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 24 23:20:40.224: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908036, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908036, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908036, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908036, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 24 23:20:42.246: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Aug 24 23:20:42.255: INFO: Updating deployment test-recreate-deployment
Aug 24 23:20:42.255: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71
Aug 24 23:20:42.921: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-9200 /apis/apps/v1/namespaces/deployment-9200/deployments/test-recreate-deployment d4839554-da22-4bbe-a788-62289dd58297 3408116 2 2020-08-24 23:20:36 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-08-24 23:20:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-24 23:20:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000d54d08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-24 23:20:42 +0000 UTC,LastTransitionTime:2020-08-24 23:20:42 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2020-08-24 23:20:42 +0000 UTC,LastTransitionTime:2020-08-24 23:20:36 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}
Aug 24 23:20:42.943: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-9200 /apis/apps/v1/namespaces/deployment-9200/replicasets/test-recreate-deployment-f79dd4667 a6aa7d6e-1136-4a11-b499-537f03ce8562 3408114 1 2020-08-24 23:20:42 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment d4839554-da22-4bbe-a788-62289dd58297 0xc000d55a40 0xc000d55a41}] [] [{kube-controller-manager Update apps/v1 2020-08-24 23:20:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4839554-da22-4bbe-a788-62289dd58297\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000d55c08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 24 23:20:42.943: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Aug 24 23:20:42.943: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-c96cf48f deployment-9200 /apis/apps/v1/namespaces/deployment-9200/replicasets/test-recreate-deployment-c96cf48f 2478d643-ef05-499a-a9bd-608832e8f914 3408105 2 2020-08-24 23:20:36 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment d4839554-da22-4bbe-a788-62289dd58297 0xc000d557bf 0xc000d557f0}] [] [{kube-controller-manager Update apps/v1 2020-08-24 23:20:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d4839554-da22-4bbe-a788-62289dd58297\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: c96cf48f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000d558a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 24 23:20:42.959: INFO: Pod "test-recreate-deployment-f79dd4667-d48qw" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-d48qw test-recreate-deployment-f79dd4667- deployment-9200 /api/v1/namespaces/deployment-9200/pods/test-recreate-deployment-f79dd4667-d48qw 3836544c-5b67-425b-bd47-1308a8957ce7 3408117 0 2020-08-24 23:20:42 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 a6aa7d6e-1136-4a11-b499-537f03ce8562 0xc002ff1170 0xc002ff1171}] [] [{kube-controller-manager Update v1 2020-08-24 23:20:42 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6aa7d6e-1136-4a11-b499-537f03ce8562\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-24 23:20:42 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kbbf8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kbbf8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kbbf8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:20:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:20:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:20:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:20:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-24 23:20:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 24 23:20:42.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9200" for this suite.
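As the dumps above show, the deployment uses Strategy Type:Recreate, so the rollout from the agnhost revision to the httpd revision first scales the old ReplicaSet (test-recreate-deployment-c96cf48f) to 0 and only then brings up the new one (test-recreate-deployment-f79dd4667). A minimal sketch of creating such a deployment, with the image and labels taken from this log (the helper names are mine):

    // Sketch: Deployment with the Recreate strategy, as exercised by the test above.
    package main

    import (
    	"context"

    	appsv1 "k8s.io/api/apps/v1"
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	replicas := int32(1)
    	labels := map[string]string{"name": "sample-pod-3"}
    	d := &appsv1.Deployment{
    		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
    		Spec: appsv1.DeploymentSpec{
    			Replicas: &replicas,
    			Selector: &metav1.LabelSelector{MatchLabels: labels},
    			// Recreate kills all old pods before any new pod starts, so old and
    			// new revisions never run side by side.
    			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
    			Template: corev1.PodTemplateSpec{
    				ObjectMeta: metav1.ObjectMeta{Labels: labels},
    				Spec: corev1.PodSpec{Containers: []corev1.Container{{
    					Name:  "agnhost",
    					Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20",
    				}}},
    			},
    		},
    	}
    	if _, err := client.AppsV1().Deployments("deployment-9200").
    		Create(context.TODO(), d, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}
    }

Updating the template image afterwards (as the test does at 23:20:42) triggers exactly the scale-down-then-scale-up sequence visible in the ReplicaSet dumps.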
• [SLOW TEST:7.079 seconds]
[sig-apps] Deployment
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":4,"skipped":61,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 24 23:20:42.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Aug 24 23:20:43.099: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ab45b296-5484-4694-8b2e-c85895333a57" in namespace "downward-api-2794" to be "Succeeded or Failed"
Aug 24 23:20:43.133: INFO: Pod "downwardapi-volume-ab45b296-5484-4694-8b2e-c85895333a57": Phase="Pending", Reason="", readiness=false. Elapsed: 34.202113ms
Aug 24 23:20:45.770: INFO: Pod "downwardapi-volume-ab45b296-5484-4694-8b2e-c85895333a57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.670679697s
Aug 24 23:20:47.779: INFO: Pod "downwardapi-volume-ab45b296-5484-4694-8b2e-c85895333a57": Phase="Pending", Reason="", readiness=false. Elapsed: 4.679662091s
Aug 24 23:20:49.823: INFO: Pod "downwardapi-volume-ab45b296-5484-4694-8b2e-c85895333a57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.723642217s
STEP: Saw pod success
Aug 24 23:20:49.823: INFO: Pod "downwardapi-volume-ab45b296-5484-4694-8b2e-c85895333a57" satisfied condition "Succeeded or Failed"
Aug 24 23:20:49.840: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-ab45b296-5484-4694-8b2e-c85895333a57 container client-container:
STEP: delete the pod
Aug 24 23:20:50.189: INFO: Waiting for pod downwardapi-volume-ab45b296-5484-4694-8b2e-c85895333a57 to disappear
Aug 24 23:20:50.211: INFO: Pod downwardapi-volume-ab45b296-5484-4694-8b2e-c85895333a57 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 24 23:20:50.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2794" for this suite.
• [SLOW TEST:7.277 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":303,"completed":5,"skipped":162,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 24 23:20:50.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Aug 24 23:20:50.805: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c80b0991-d358-475b-89bf-1691480dd374" in namespace "downward-api-9575" to be "Succeeded or Failed"
Aug 24 23:20:51.320: INFO: Pod "downwardapi-volume-c80b0991-d358-475b-89bf-1691480dd374": Phase="Pending", Reason="", readiness=false. Elapsed: 515.184404ms
Aug 24 23:20:53.325: INFO: Pod "downwardapi-volume-c80b0991-d358-475b-89bf-1691480dd374": Phase="Pending", Reason="", readiness=false. Elapsed: 2.519345269s
Aug 24 23:20:55.328: INFO: Pod "downwardapi-volume-c80b0991-d358-475b-89bf-1691480dd374": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.522534862s
STEP: Saw pod success
Aug 24 23:20:55.328: INFO: Pod "downwardapi-volume-c80b0991-d358-475b-89bf-1691480dd374" satisfied condition "Succeeded or Failed"
Aug 24 23:20:55.331: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-c80b0991-d358-475b-89bf-1691480dd374 container client-container:
STEP: delete the pod
Aug 24 23:20:55.374: INFO: Waiting for pod downwardapi-volume-c80b0991-d358-475b-89bf-1691480dd374 to disappear
Aug 24 23:20:55.396: INFO: Pod downwardapi-volume-c80b0991-d358-475b-89bf-1691480dd374 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 24 23:20:55.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9575" for this suite.
• [SLOW TEST:5.162 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":6,"skipped":170,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 24 23:20:55.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89
Aug 24 23:20:55.805: INFO: Waiting up to 1m0s for all nodes to be ready
Aug 24 23:21:55.828: INFO: Waiting for terminating namespaces to be deleted...
[It] validates lower priority pod preemption by critical pod [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Create pods that use 2/3 of node resources.
Aug 24 23:21:55.947: INFO: Created pod: pod0-sched-preemption-low-priority
Aug 24 23:21:56.104: INFO: Created pod: pod1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a critical pod that uses the same resources as a lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 24 23:22:24.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-8769" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77
• [SLOW TEST:90.197 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates lower priority pod preemption by critical pod [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":303,"completed":7,"skipped":189,"failed":0}
S
------------------------------
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 24 23:22:25.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-8725/configmap-test-454579fc-1b70-4fa1-b7e1-943afb12e662
STEP: Creating a pod to test consume configMaps
Aug 24 23:22:26.202: INFO: Waiting up to 5m0s for pod "pod-configmaps-f46ad1e4-fd9f-444c-838b-30d8cbb8a349" in namespace "configmap-8725" to be "Succeeded or Failed"
Aug 24 23:22:26.209: INFO: Pod "pod-configmaps-f46ad1e4-fd9f-444c-838b-30d8cbb8a349": Phase="Pending", Reason="", readiness=false. Elapsed: 6.876105ms
Aug 24 23:22:28.278: INFO: Pod "pod-configmaps-f46ad1e4-fd9f-444c-838b-30d8cbb8a349": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075894143s
Aug 24 23:22:30.284: INFO: Pod "pod-configmaps-f46ad1e4-fd9f-444c-838b-30d8cbb8a349": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082047765s
Aug 24 23:22:32.338: INFO: Pod "pod-configmaps-f46ad1e4-fd9f-444c-838b-30d8cbb8a349": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.136198026s
STEP: Saw pod success
Aug 24 23:22:32.338: INFO: Pod "pod-configmaps-f46ad1e4-fd9f-444c-838b-30d8cbb8a349" satisfied condition "Succeeded or Failed"
Aug 24 23:22:32.340: INFO: Trying to get logs from node latest-worker pod pod-configmaps-f46ad1e4-fd9f-444c-838b-30d8cbb8a349 container env-test:
STEP: delete the pod
Aug 24 23:22:32.393: INFO: Waiting for pod pod-configmaps-f46ad1e4-fd9f-444c-838b-30d8cbb8a349 to disappear
Aug 24 23:22:32.423: INFO: Pod pod-configmaps-f46ad1e4-fd9f-444c-838b-30d8cbb8a349 no longer exists
[AfterEach] [sig-node] ConfigMap
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 24 23:22:32.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8725" for this suite.
• [SLOW TEST:6.827 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":8,"skipped":190,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 24 23:22:32.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-ba89d1fd-4e29-42b0-a291-bfaabea8c5fe
STEP: Creating a pod to test consume configMaps
Aug 24 23:22:32.797: INFO: Waiting up to 5m0s for pod "pod-configmaps-41723527-2542-41da-98c8-c39d5a70f62b" in namespace "configmap-1029" to be "Succeeded or Failed"
Aug 24 23:22:32.824: INFO: Pod "pod-configmaps-41723527-2542-41da-98c8-c39d5a70f62b": Phase="Pending", Reason="", readiness=false. Elapsed: 26.149055ms
Aug 24 23:22:34.831: INFO: Pod "pod-configmaps-41723527-2542-41da-98c8-c39d5a70f62b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033539982s
Aug 24 23:22:37.032: INFO: Pod "pod-configmaps-41723527-2542-41da-98c8-c39d5a70f62b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.234445894s
Aug 24 23:22:39.057: INFO: Pod "pod-configmaps-41723527-2542-41da-98c8-c39d5a70f62b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.259541742s
STEP: Saw pod success
Aug 24 23:22:39.057: INFO: Pod "pod-configmaps-41723527-2542-41da-98c8-c39d5a70f62b" satisfied condition "Succeeded or Failed"
Aug 24 23:22:39.060: INFO: Trying to get logs from node latest-worker pod pod-configmaps-41723527-2542-41da-98c8-c39d5a70f62b container configmap-volume-test:
STEP: delete the pod
Aug 24 23:22:39.852: INFO: Waiting for pod pod-configmaps-41723527-2542-41da-98c8-c39d5a70f62b to disappear
Aug 24 23:22:39.898: INFO: Pod pod-configmaps-41723527-2542-41da-98c8-c39d5a70f62b no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 24 23:22:39.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1029" for this suite.
• [SLOW TEST:7.817 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":9,"skipped":201,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 24 23:22:40.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[BeforeEach] Kubectl replace
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1581
[It] should update a single-container pod's image [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 24 23:22:40.494: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-6598'
Aug 24 23:22:43.719: INFO: stderr: ""
Aug 24 23:22:43.719: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Aug 24 23:22:48.769: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-6598 -o json'
Aug 24 23:22:49.055: INFO: stderr: ""
Aug 24 23:22:49.055: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-08-24T23:22:43Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-08-24T23:22:43Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.2.231\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-08-24T23:22:47Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-6598\",\n \"resourceVersion\": \"3408943\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-6598/pods/e2e-test-httpd-pod\",\n \"uid\": \"171ab49f-7fa9-48c7-8a88-e3fb4a4392a4\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-rrxkh\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-rrxkh\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-rrxkh\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-24T23:22:43Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-24T23:22:47Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-24T23:22:47Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-24T23:22:43Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://9e80699d40b74305ddad095b2d6659b4eea4bdead4ab94c0ea9f224757173b85\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-08-24T23:22:47Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.11\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.231\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.231\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-08-24T23:22:43Z\"\n }\n}\n"
STEP: replace the image in the pod
Aug 24 23:22:49.055: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-6598'
Aug 24 23:22:49.793: INFO: stderr: ""
Aug 24 23:22:49.793: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1586
Aug 24 23:22:49.883: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6598'
Aug 24 23:23:00.105: INFO: stderr: ""
Aug 24 23:23:00.105: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 24 23:23:00.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6598" for this suite.
• [SLOW TEST:19.882 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1577 should update a single-container pod's image [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":303,"completed":10,"skipped":208,"failed":0} S ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:23:00.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Aug 24 23:23:00.219: INFO: Created pod &Pod{ObjectMeta:{dns-7132 dns-7132 /api/v1/namespaces/dns-7132/pods/dns-7132 7e957b95-d1ec-44b5-b1ce-dc15b8368b57 3409031 0 2020-08-24 23:23:00 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-08-24 23:23:00 +0000 UTC FieldsV1 
{"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-blcpc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-blcpc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-blcpc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:23:00.222: INFO: The status of Pod dns-7132 is Pending, waiting for it to be Running (with Ready = true) Aug 24 23:23:02.233: INFO: The status of Pod dns-7132 is Pending, waiting for it 
to be Running (with Ready = true) Aug 24 23:23:04.226: INFO: The status of Pod dns-7132 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Aug 24 23:23:04.226: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-7132 PodName:dns-7132 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 24 23:23:04.226: INFO: >>> kubeConfig: /root/.kube/config I0824 23:23:04.261974 7 log.go:181] (0xc0000f0dc0) (0xc0024670e0) Create stream I0824 23:23:04.262014 7 log.go:181] (0xc0000f0dc0) (0xc0024670e0) Stream added, broadcasting: 1 I0824 23:23:04.264237 7 log.go:181] (0xc0000f0dc0) Reply frame received for 1 I0824 23:23:04.264276 7 log.go:181] (0xc0000f0dc0) (0xc001ef9900) Create stream I0824 23:23:04.264286 7 log.go:181] (0xc0000f0dc0) (0xc001ef9900) Stream added, broadcasting: 3 I0824 23:23:04.265405 7 log.go:181] (0xc0000f0dc0) Reply frame received for 3 I0824 23:23:04.265466 7 log.go:181] (0xc0000f0dc0) (0xc0000f2320) Create stream I0824 23:23:04.265494 7 log.go:181] (0xc0000f0dc0) (0xc0000f2320) Stream added, broadcasting: 5 I0824 23:23:04.266663 7 log.go:181] (0xc0000f0dc0) Reply frame received for 5 I0824 23:23:04.377495 7 log.go:181] (0xc0000f0dc0) Data frame received for 3 I0824 23:23:04.377534 7 log.go:181] (0xc001ef9900) (3) Data frame handling I0824 23:23:04.377563 7 log.go:181] (0xc001ef9900) (3) Data frame sent I0824 23:23:04.379425 7 log.go:181] (0xc0000f0dc0) Data frame received for 3 I0824 23:23:04.379449 7 log.go:181] (0xc001ef9900) (3) Data frame handling I0824 23:23:04.379472 7 log.go:181] (0xc0000f0dc0) Data frame received for 5 I0824 23:23:04.379499 7 log.go:181] (0xc0000f2320) (5) Data frame handling I0824 23:23:04.381129 7 log.go:181] (0xc0000f0dc0) Data frame received for 1 I0824 23:23:04.381146 7 log.go:181] (0xc0024670e0) (1) Data frame handling I0824 23:23:04.381158 7 log.go:181] (0xc0024670e0) (1) Data frame sent I0824 23:23:04.381228 7 log.go:181] (0xc0000f0dc0) (0xc0024670e0) Stream removed, broadcasting: 1 I0824 23:23:04.381306 7 log.go:181] (0xc0000f0dc0) Go away received I0824 23:23:04.381682 7 log.go:181] (0xc0000f0dc0) (0xc0024670e0) Stream removed, broadcasting: 1 I0824 23:23:04.381705 7 log.go:181] (0xc0000f0dc0) (0xc001ef9900) Stream removed, broadcasting: 3 I0824 23:23:04.381717 7 log.go:181] (0xc0000f0dc0) (0xc0000f2320) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Aug 24 23:23:04.381: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-7132 PodName:dns-7132 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 24 23:23:04.381: INFO: >>> kubeConfig: /root/.kube/config I0824 23:23:04.407934 7 log.go:181] (0xc000850dc0) (0xc001ef9cc0) Create stream I0824 23:23:04.407984 7 log.go:181] (0xc000850dc0) (0xc001ef9cc0) Stream added, broadcasting: 1 I0824 23:23:04.410562 7 log.go:181] (0xc000850dc0) Reply frame received for 1 I0824 23:23:04.410598 7 log.go:181] (0xc000850dc0) (0xc0000f2aa0) Create stream I0824 23:23:04.410612 7 log.go:181] (0xc000850dc0) (0xc0000f2aa0) Stream added, broadcasting: 3 I0824 23:23:04.411568 7 log.go:181] (0xc000850dc0) Reply frame received for 3 I0824 23:23:04.411617 7 log.go:181] (0xc000850dc0) (0xc002467180) Create stream I0824 23:23:04.411634 7 log.go:181] (0xc000850dc0) (0xc002467180) Stream added, broadcasting: 5 I0824 23:23:04.412506 7 log.go:181] (0xc000850dc0) Reply frame received for 5 I0824 23:23:04.512063 7 log.go:181] (0xc000850dc0) Data frame received for 3 I0824 23:23:04.512127 7 log.go:181] (0xc0000f2aa0) (3) Data frame handling I0824 23:23:04.512151 7 log.go:181] (0xc0000f2aa0) (3) Data frame sent I0824 23:23:04.512162 7 log.go:181] (0xc000850dc0) Data frame received for 3 I0824 23:23:04.512170 7 log.go:181] (0xc0000f2aa0) (3) Data frame handling I0824 23:23:04.512436 7 log.go:181] (0xc000850dc0) Data frame received for 5 I0824 23:23:04.512458 7 log.go:181] (0xc002467180) (5) Data frame handling I0824 23:23:04.514168 7 log.go:181] (0xc000850dc0) Data frame received for 1 I0824 23:23:04.514183 7 log.go:181] (0xc001ef9cc0) (1) Data frame handling I0824 23:23:04.514196 7 log.go:181] (0xc001ef9cc0) (1) Data frame sent I0824 23:23:04.514210 7 log.go:181] (0xc000850dc0) (0xc001ef9cc0) Stream removed, broadcasting: 1 I0824 23:23:04.514225 7 log.go:181] (0xc000850dc0) Go away received I0824 23:23:04.514317 7 log.go:181] (0xc000850dc0) (0xc001ef9cc0) Stream removed, broadcasting: 1 I0824 23:23:04.514335 7 log.go:181] (0xc000850dc0) (0xc0000f2aa0) Stream removed, broadcasting: 3 I0824 23:23:04.514341 7 log.go:181] (0xc000850dc0) (0xc002467180) Stream removed, broadcasting: 5 Aug 24 23:23:04.514: INFO: Deleting pod dns-7132... [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:23:04.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7132" for this suite. 
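------------------------------
For reference: the DNS test above builds a pod with dnsPolicy=None plus a customized dnsConfig, then inspects the resolver configuration from inside it. A minimal manifest using the same values logged above (nameserver 1.1.1.1, search domain resolv.conf.local, the agnhost pause container); the namespace is the test's generated one and would need to exist when replaying by hand:

  kubectl apply -n dns-7132 -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: dns-7132
  spec:
    containers:
    - name: agnhost
      image: k8s.gcr.io/e2e-test-images/agnhost:2.20
      args: ["pause"]
    dnsPolicy: "None"      # ignore the cluster resolver entirely
    dnsConfig:             # required when dnsPolicy is None
      nameservers:
      - 1.1.1.1
      searches:
      - resolv.conf.local
  EOF
  # The pod's resolv.conf should now contain only the entries above.
  kubectl exec -n dns-7132 dns-7132 -- cat /etc/resolv.conf
------------------------------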
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":303,"completed":11,"skipped":209,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:23:04.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:23:11.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8395" for this suite. • [SLOW TEST:6.661 seconds] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a busybox command in a pod /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:41 should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":303,"completed":12,"skipped":221,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:23:11.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-7291 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Aug 24 23:23:11.631: INFO: Found 0 stateful pods, waiting for 3 Aug 24 23:23:21.635: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 24 23:23:21.636: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 24 23:23:21.636: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Aug 24 23:23:31.672: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 24 23:23:31.672: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 24 23:23:31.672: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Aug 24 23:23:31.883: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Aug 24 23:23:42.064: INFO: Updating stateful set ss2 Aug 24 23:23:42.614: INFO: Waiting for Pod statefulset-7291/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Aug 24 23:23:53.640: INFO: Found 2 stateful pods, waiting for 3 Aug 24 23:24:03.644: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 24 23:24:03.644: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 24 23:24:03.644: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Aug 24 23:24:03.664: INFO: Updating stateful set ss2 Aug 24 23:24:03.842: INFO: Waiting for Pod statefulset-7291/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 24 23:24:13.867: INFO: Updating stateful set ss2 Aug 24 23:24:13.935: INFO: Waiting for StatefulSet statefulset-7291/ss2 to complete update Aug 24 23:24:13.935: INFO: Waiting for Pod statefulset-7291/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 24 23:24:23.995: INFO: Deleting all statefulset in ns statefulset-7291 Aug 24 23:24:24.057: INFO: Scaling statefulset ss2 to 0 Aug 24 23:24:54.096: INFO: Waiting for statefulset status.replicas updated to 0 Aug 24 23:24:54.098: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:24:54.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7291" for this suite. 
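------------------------------
For reference: the canary and phased rolling updates above are driven by the StatefulSet RollingUpdate partition: only pods with an ordinal greater than or equal to the partition move to the new revision, so raising it pins most pods while a canary updates, and lowering it phases the rollout through the rest. A sketch using the names and images from the log; the partition values are illustrative, since the test's exact settings are not printed:

  # Canary: bump the image but hold the partition at 2, so only ss2-2 updates.
  kubectl -n statefulset-7291 patch statefulset ss2 --type='json' -p='[
    {"op": "replace", "path": "/spec/updateStrategy/rollingUpdate/partition", "value": 2},
    {"op": "replace", "path": "/spec/template/spec/containers/0/image",
     "value": "docker.io/library/httpd:2.4.39-alpine"}
  ]'
  # Phase the rollout: lowering the partition lets ss2-1, then ss2-0, update too.
  kubectl -n statefulset-7291 patch statefulset ss2 --type='json' \
    -p='[{"op": "replace", "path": "/spec/updateStrategy/rollingUpdate/partition", "value": 0}]'
------------------------------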
• [SLOW TEST:102.665 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":303,"completed":13,"skipped":222,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:24:54.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should support --unix-socket=/path [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy Aug 24 23:24:54.231: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix218388044/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:24:54.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7155" for this suite. 
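------------------------------
For reference: the proxy test above starts `kubectl proxy` on a unix domain socket rather than a TCP port, then retrieves /api/ through it. The same check by hand, with an illustrative socket path:

  kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
  curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
  kill %1   # stop the background proxy
------------------------------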
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":303,"completed":14,"skipped":247,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:24:54.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 24 23:24:55.761: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 24 23:24:57.770: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908295, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908295, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908296, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908295, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 24 23:25:00.818: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:25:01.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1324" for this suite. 
STEP: Destroying namespace "webhook-1324-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.985 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":303,"completed":15,"skipped":259,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:25:01.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-vk7t STEP: Creating a pod to test atomic-volume-subpath Aug 24 23:25:01.673: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-vk7t" in namespace "subpath-7688" to be "Succeeded or Failed" Aug 24 23:25:01.699: INFO: Pod "pod-subpath-test-configmap-vk7t": Phase="Pending", Reason="", readiness=false. Elapsed: 25.972197ms Aug 24 23:25:03.702: INFO: Pod "pod-subpath-test-configmap-vk7t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029743793s Aug 24 23:25:05.706: INFO: Pod "pod-subpath-test-configmap-vk7t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033813311s Aug 24 23:25:07.724: INFO: Pod "pod-subpath-test-configmap-vk7t": Phase="Running", Reason="", readiness=true. Elapsed: 6.051817573s Aug 24 23:25:09.728: INFO: Pod "pod-subpath-test-configmap-vk7t": Phase="Running", Reason="", readiness=true. Elapsed: 8.055127357s Aug 24 23:25:11.731: INFO: Pod "pod-subpath-test-configmap-vk7t": Phase="Running", Reason="", readiness=true. Elapsed: 10.058759031s Aug 24 23:25:13.735: INFO: Pod "pod-subpath-test-configmap-vk7t": Phase="Running", Reason="", readiness=true. Elapsed: 12.062929486s Aug 24 23:25:15.742: INFO: Pod "pod-subpath-test-configmap-vk7t": Phase="Running", Reason="", readiness=true. Elapsed: 14.069363706s Aug 24 23:25:17.820: INFO: Pod "pod-subpath-test-configmap-vk7t": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.147251126s Aug 24 23:25:19.898: INFO: Pod "pod-subpath-test-configmap-vk7t": Phase="Running", Reason="", readiness=true. Elapsed: 18.225457694s Aug 24 23:25:21.902: INFO: Pod "pod-subpath-test-configmap-vk7t": Phase="Running", Reason="", readiness=true. Elapsed: 20.229139339s Aug 24 23:25:23.906: INFO: Pod "pod-subpath-test-configmap-vk7t": Phase="Running", Reason="", readiness=true. Elapsed: 22.233122366s Aug 24 23:25:25.917: INFO: Pod "pod-subpath-test-configmap-vk7t": Phase="Running", Reason="", readiness=true. Elapsed: 24.244413693s Aug 24 23:25:27.921: INFO: Pod "pod-subpath-test-configmap-vk7t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.248278143s STEP: Saw pod success Aug 24 23:25:27.921: INFO: Pod "pod-subpath-test-configmap-vk7t" satisfied condition "Succeeded or Failed" Aug 24 23:25:27.924: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-vk7t container test-container-subpath-configmap-vk7t: STEP: delete the pod Aug 24 23:25:27.984: INFO: Waiting for pod pod-subpath-test-configmap-vk7t to disappear Aug 24 23:25:27.991: INFO: Pod pod-subpath-test-configmap-vk7t no longer exists STEP: Deleting pod pod-subpath-test-configmap-vk7t Aug 24 23:25:27.991: INFO: Deleting pod "pod-subpath-test-configmap-vk7t" in namespace "subpath-7688" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:25:27.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7688" for this suite. • [SLOW TEST:26.699 seconds] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":303,"completed":16,"skipped":271,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:25:28.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Aug 24 23:25:28.054: INFO: Waiting up to 5m0s for pod "pod-b43650a0-ef16-46d8-ad24-d3961c33fce9" in 
namespace "emptydir-5659" to be "Succeeded or Failed" Aug 24 23:25:28.070: INFO: Pod "pod-b43650a0-ef16-46d8-ad24-d3961c33fce9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.661592ms Aug 24 23:25:30.224: INFO: Pod "pod-b43650a0-ef16-46d8-ad24-d3961c33fce9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.169235156s Aug 24 23:25:32.486: INFO: Pod "pod-b43650a0-ef16-46d8-ad24-d3961c33fce9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.431279324s Aug 24 23:25:34.490: INFO: Pod "pod-b43650a0-ef16-46d8-ad24-d3961c33fce9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.435873426s STEP: Saw pod success Aug 24 23:25:34.490: INFO: Pod "pod-b43650a0-ef16-46d8-ad24-d3961c33fce9" satisfied condition "Succeeded or Failed" Aug 24 23:25:34.493: INFO: Trying to get logs from node latest-worker pod pod-b43650a0-ef16-46d8-ad24-d3961c33fce9 container test-container: STEP: delete the pod Aug 24 23:25:34.745: INFO: Waiting for pod pod-b43650a0-ef16-46d8-ad24-d3961c33fce9 to disappear Aug 24 23:25:34.958: INFO: Pod pod-b43650a0-ef16-46d8-ad24-d3961c33fce9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:25:34.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5659" for this suite. • [SLOW TEST:7.029 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":17,"skipped":280,"failed":0} SS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:25:35.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token Aug 24 23:25:36.243: INFO: created pod pod-service-account-defaultsa Aug 24 23:25:36.243: INFO: pod pod-service-account-defaultsa service account token volume mount: true Aug 24 23:25:36.324: INFO: created pod pod-service-account-mountsa Aug 24 23:25:36.324: INFO: pod pod-service-account-mountsa service account token volume mount: true Aug 24 23:25:36.337: INFO: created pod pod-service-account-nomountsa Aug 24 23:25:36.337: INFO: pod pod-service-account-nomountsa service account token volume mount: false Aug 24 
23:25:36.391: INFO: created pod pod-service-account-defaultsa-mountspec Aug 24 23:25:36.391: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Aug 24 23:25:36.467: INFO: created pod pod-service-account-mountsa-mountspec Aug 24 23:25:36.468: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Aug 24 23:25:36.516: INFO: created pod pod-service-account-nomountsa-mountspec Aug 24 23:25:36.517: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Aug 24 23:25:36.565: INFO: created pod pod-service-account-defaultsa-nomountspec Aug 24 23:25:36.565: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Aug 24 23:25:36.635: INFO: created pod pod-service-account-mountsa-nomountspec Aug 24 23:25:36.635: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Aug 24 23:25:36.648: INFO: created pod pod-service-account-nomountsa-nomountspec Aug 24 23:25:36.648: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:25:36.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6414" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":303,"completed":18,"skipped":282,"failed":0} SSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:25:36.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Aug 24 23:25:36.925: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Aug 24 23:25:36.945: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Aug 24 23:25:36.945: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Aug 24 23:25:36.968: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Aug 24 23:25:36.968: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Aug 24 23:25:37.054: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Aug 24 23:25:37.054: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Aug 24 23:25:44.875: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:25:44.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-2232" for this suite. 
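------------------------------
For reference: the LimitRange test above verifies that container defaults are injected into pods that omit resource requirements, and that partial requirements are merged with the defaults. A LimitRange reproducing the default values checked in the log (requests cpu=100m, memory=200Mi, ephemeral-storage=200Gi; limits cpu=500m, memory=500Mi, ephemeral-storage=500Gi); the object name is illustrative:

  kubectl apply -n limitrange-2232 -f - <<'EOF'
  apiVersion: v1
  kind: LimitRange
  metadata:
    name: e2e-limitrange
  spec:
    limits:
    - type: Container
      defaultRequest:        # applied as .resources.requests when a pod omits them
        cpu: 100m
        memory: 200Mi
        ephemeral-storage: 200Gi
      default:               # applied as .resources.limits when a pod omits them
        cpu: 500m
        memory: 500Mi
        ephemeral-storage: 500Gi
  EOF
------------------------------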
• [SLOW TEST:9.057 seconds] [sig-scheduling] LimitRange /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":303,"completed":19,"skipped":287,"failed":0} SSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:25:45.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support rollover [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 24 23:25:47.882: INFO: Pod name rollover-pod: Found 0 pods out of 1 Aug 24 23:25:53.062: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 24 23:25:59.313: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Aug 24 23:26:01.379: INFO: Creating deployment "test-rollover-deployment" Aug 24 23:26:01.630: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Aug 24 23:26:03.923: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Aug 24 23:26:03.928: INFO: Ensure that both replica sets have 1 created replica Aug 24 23:26:04.659: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Aug 24 23:26:04.668: INFO: Updating deployment test-rollover-deployment Aug 24 23:26:04.668: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Aug 24 23:26:06.854: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Aug 24 23:26:06.859: INFO: Make sure deployment "test-rollover-deployment" is complete Aug 24 23:26:06.864: INFO: all replica sets need to contain the pod-template-hash label Aug 24 23:26:06.864: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908362, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908362, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908365, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908361, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 24 23:26:08.927: INFO: all replica sets need to contain the pod-template-hash label Aug 24 23:26:08.927: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908362, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908362, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908365, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908361, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 24 23:26:10.987: INFO: all replica sets need to contain the pod-template-hash label Aug 24 23:26:10.987: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908362, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908362, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908369, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908361, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 24 23:26:12.931: INFO: all replica sets need to contain the pod-template-hash label Aug 24 23:26:12.931: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908362, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908362, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908369, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908361, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 24 
23:26:14.896: INFO: all replica sets need to contain the pod-template-hash label Aug 24 23:26:14.896: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908362, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908362, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908369, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908361, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 24 23:26:16.871: INFO: all replica sets need to contain the pod-template-hash label Aug 24 23:26:16.871: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908362, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908362, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908369, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908361, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 24 23:26:18.872: INFO: all replica sets need to contain the pod-template-hash label Aug 24 23:26:18.872: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908362, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908362, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908369, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908361, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 24 23:26:20.911: INFO: Aug 24 23:26:20.912: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908362, loc:(*time.Location)(0x7712980)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908362, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908380, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908361, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 24 23:26:22.871: INFO: Aug 24 23:26:22.871: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Aug 24 23:26:22.877: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-9773 /apis/apps/v1/namespaces/deployment-9773/deployments/test-rollover-deployment 491b2b4b-0d5d-42c6-bdc1-97e431428ce6 3410479 2 2020-08-24 23:26:01 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-08-24 23:26:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-24 23:26:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002b47dc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-24 23:26:02 +0000 UTC,LastTransitionTime:2020-08-24 23:26:02 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-5797c7764" has successfully progressed.,LastUpdateTime:2020-08-24 23:26:20 +0000 UTC,LastTransitionTime:2020-08-24 23:26:01 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Aug 24 23:26:22.880: INFO: New ReplicaSet "test-rollover-deployment-5797c7764" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-5797c7764 deployment-9773 /apis/apps/v1/namespaces/deployment-9773/replicasets/test-rollover-deployment-5797c7764 298d51fc-f318-4cae-8de9-d8061efd3172 3410466 2 2020-08-24 23:26:04 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 491b2b4b-0d5d-42c6-bdc1-97e431428ce6 0xc002e31a30 0xc002e31a31}] [] [{kube-controller-manager Update apps/v1 2020-08-24 23:26:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"491b2b4b-0d5d-42c6-bdc1-97e431428ce6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5797c7764,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002e31aa8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Aug 24 23:26:22.880: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Aug 24 23:26:22.880: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9773 /apis/apps/v1/namespaces/deployment-9773/replicasets/test-rollover-controller bd95cf83-1d66-4a4f-a721-67807a472791 3410478 2 2020-08-24 23:25:47 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 491b2b4b-0d5d-42c6-bdc1-97e431428ce6 0xc002e3191f 0xc002e31930}] [] [{e2e.test Update apps/v1 2020-08-24 23:25:47 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-24 23:26:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"491b2b4b-0d5d-42c6-bdc1-97e431428ce6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002e319c8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 24 23:26:22.880: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-9773 /apis/apps/v1/namespaces/deployment-9773/replicasets/test-rollover-deployment-78bc8b888c 7f9675b4-d181-4f19-8c8d-3648eafb6e0e 3410376 2 2020-08-24 23:26:01 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 491b2b4b-0d5d-42c6-bdc1-97e431428ce6 0xc002e31b17 0xc002e31b18}] [] 
[{kube-controller-manager Update apps/v1 2020-08-24 23:26:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"491b2b4b-0d5d-42c6-bdc1-97e431428ce6\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002e31ba8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 24 23:26:22.883: INFO: Pod "test-rollover-deployment-5797c7764-ff2sf" is available: &Pod{ObjectMeta:{test-rollover-deployment-5797c7764-ff2sf test-rollover-deployment-5797c7764- deployment-9773 /api/v1/namespaces/deployment-9773/pods/test-rollover-deployment-5797c7764-ff2sf 78f09ab5-77a5-4b4d-96cd-6b5dc7a21928 3410409 0 2020-08-24 23:26:05 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [{apps/v1 ReplicaSet test-rollover-deployment-5797c7764 298d51fc-f318-4cae-8de9-d8061efd3172 0xc000c2e2f0 0xc000c2e2f1}] [] [{kube-controller-manager Update v1 2020-08-24 23:26:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"298d51fc-f318-4cae-8de9-d8061efd3172\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-24 23:26:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.105\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q9hjb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q9hjb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q9hjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toler
ation{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:26:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:26:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:26:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:26:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.105,StartTime:2020-08-24 23:26:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-24 23:26:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://2e7dbe6d5b50d146c269dc3ca3c1125aac523baaa987c212bc52e919cb4de6a5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.105,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:26:22.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9773" for this suite. 
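For readers skimming the dump above: the rollover test drives a Deployment whose strategy is RollingUpdate with maxSurge=1, maxUnavailable=0 and minReadySeconds=10, so the controller keeps the single replica available while the new ReplicaSet proves itself for 10 seconds before the old ones are scaled to zero (the "Ensure that both old replica sets have no replicas" check). Below is a minimal client-go sketch of that spec, reusing the names and image from the dump but otherwise not the suite's own code.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	replicas := int32(1)
	maxSurge := intstr.FromInt(1)       // one extra pod may be created during the rollout
	maxUnavailable := intstr.FromInt(0) // never drop below the desired replica count
	labels := map[string]string{"name": "rollover-pod"}

	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rollover-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:        &replicas,
			MinReadySeconds: 10, // new pods must stay ready this long to count as available
			Selector:        &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost",
						Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20",
					}},
				},
			},
		},
	}
	fmt.Printf("strategy=%s minReadySeconds=%d\n", d.Spec.Strategy.Type, d.Spec.MinReadySeconds)
}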
• [SLOW TEST:37.043 seconds] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":303,"completed":20,"skipped":290,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:26:22.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info Aug 24 23:26:23.153: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config cluster-info' Aug 24 23:26:23.268: INFO: stderr: "" Aug 24 23:26:23.268: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45453\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45453/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:26:23.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2286" for this suite. 
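The escape sequences in the captured stdout above (\x1b[0;32m and so on) are ANSI color codes emitted by kubectl; the test simply substring-matches the decolored output. A rough, illustrative reproduction of that check (not the suite's implementation) is to shell out to kubectl, strip the ANSI codes, and look for the master line; this assumes kubectl is on PATH with a working kubeconfig, and note that v1.19-era kubectl prints "Kubernetes master" where newer releases say "control plane".

package main

import (
	"fmt"
	"os/exec"
	"regexp"
	"strings"
)

// ansi matches the color escape sequences visible in the log above.
var ansi = regexp.MustCompile(`\x1b\[[0-9;]*m`)

func main() {
	out, err := exec.Command("kubectl", "cluster-info").CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	plain := ansi.ReplaceAllString(string(out), "")
	fmt.Println("master line present:", strings.Contains(plain, "Kubernetes master"))
}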
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":303,"completed":21,"skipped":381,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:26:23.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:26:23.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1159" for this suite. STEP: Destroying namespace "nspatchtest-57ac1bc0-7d72-44bb-87ff-d1ae52aa2371-4418" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":303,"completed":22,"skipped":399,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:26:23.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Aug 24 23:26:29.157: INFO: Successfully updated pod "pod-update-1422b3e0-b2fb-4d65-8efd-63c23a9e42e3" STEP: verifying the updated pod is in kubernetes Aug 24 23:26:29.494: INFO: Pod update OK [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:26:29.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3644" for this suite. 
• [SLOW TEST:5.819 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":303,"completed":23,"skipped":451,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:26:29.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Aug 24 23:26:29.717: INFO: PodSpec: initContainers in spec.initContainers Aug 24 23:27:31.588: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-9d8bab3d-2155-46ed-ba57-207490ac613e", GenerateName:"", Namespace:"init-container-742", SelfLink:"/api/v1/namespaces/init-container-742/pods/pod-init-9d8bab3d-2155-46ed-ba57-207490ac613e", UID:"3f22a548-9f6d-4f97-8de7-454c6fc4d68f", ResourceVersion:"3410851", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63733908389, loc:(*time.Location)(0x7712980)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"717836778"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002d21e60), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002d21e80)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002d21ea0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002d21ec0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-b6qzp", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002a0d100), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), 
Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-b6qzp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-b6qzp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-b6qzp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000b23018), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc003a99260), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000b230a0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000b230e0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000b230e8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000b230ec), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc002dc5c40), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908389, loc:(*time.Location)(0x7712980)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908389, loc:(*time.Location)(0x7712980)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908389, loc:(*time.Location)(0x7712980)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908389, loc:(*time.Location)(0x7712980)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.14", PodIP:"10.244.1.106", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.106"}}, StartTime:(*v1.Time)(0xc002d21ee0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003a99340)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003a993b0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://234c8f441b3db73ee1ded6bc1eed04a3782f80368101a64dfb3946cdab142de9", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002d21f20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002d21f00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc000b2316f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:27:31.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-742" for this suite. 
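The dump above is the expected steady state for this test: under restartPolicy Always a failing init container is restarted with backoff (init1 shows RestartCount:3 and a Terminated last state), init2 never starts (Waiting, empty ContainerID), and the app container run1 stays Waiting while the pod remains Pending with reason ContainersNotInitialized. A sketch of the pod shape, using the names and images from the dump but not the suite's code:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-example"}, // name is a placeholder
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				// init1 always fails, so the kubelet retries it forever under RestartAlways
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				// init2 would succeed, but never starts while init1 keeps failing
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				// the app container stays Waiting until all init containers succeed
				{Name: "run1", Image: "k8s.gcr.io/pause:3.2"},
			},
		},
	}
	fmt.Println("init containers:", len(pod.Spec.InitContainers))
}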
• [SLOW TEST:62.122 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":303,"completed":24,"skipped":463,"failed":0} SSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:27:31.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Aug 24 23:27:31.767: INFO: Waiting up to 5m0s for pod "downward-api-aa6e67a3-faa4-4031-a4c8-3198e9004076" in namespace "downward-api-6972" to be "Succeeded or Failed" Aug 24 23:27:31.771: INFO: Pod "downward-api-aa6e67a3-faa4-4031-a4c8-3198e9004076": Phase="Pending", Reason="", readiness=false. Elapsed: 3.688874ms Aug 24 23:27:33.776: INFO: Pod "downward-api-aa6e67a3-faa4-4031-a4c8-3198e9004076": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008113671s Aug 24 23:27:35.780: INFO: Pod "downward-api-aa6e67a3-faa4-4031-a4c8-3198e9004076": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012289465s STEP: Saw pod success Aug 24 23:27:35.780: INFO: Pod "downward-api-aa6e67a3-faa4-4031-a4c8-3198e9004076" satisfied condition "Succeeded or Failed" Aug 24 23:27:35.783: INFO: Trying to get logs from node latest-worker pod downward-api-aa6e67a3-faa4-4031-a4c8-3198e9004076 container dapi-container: STEP: delete the pod Aug 24 23:27:36.284: INFO: Waiting for pod downward-api-aa6e67a3-faa4-4031-a4c8-3198e9004076 to disappear Aug 24 23:27:36.286: INFO: Pod downward-api-aa6e67a3-faa4-4031-a4c8-3198e9004076 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:27:36.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6972" for this suite. 
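The downward API test above injects the container's own resource limits and requests as environment variables via resourceFieldRef. A minimal sketch of that env stanza follows; the variable names are illustrative (the test uses its own), and Divisor defaults to "1" when left unset.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Each env var mirrors one resource field of the declaring container.
	env := []corev1.EnvVar{}
	for name, resource := range map[string]string{
		"CPU_LIMIT":      "limits.cpu",
		"MEMORY_LIMIT":   "limits.memory",
		"CPU_REQUEST":    "requests.cpu",
		"MEMORY_REQUEST": "requests.memory",
	} {
		env = append(env, corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: resource},
			},
		})
	}
	fmt.Println("downward API env vars:", len(env))
}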
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":303,"completed":25,"skipped":470,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:27:36.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:27:36.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1384" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":303,"completed":26,"skipped":474,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:27:37.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-28d2678b-fc88-4862-ac33-fbdd912faff1 in namespace container-probe-2549 Aug 24 23:27:41.494: INFO: Started pod liveness-28d2678b-fc88-4862-ac33-fbdd912faff1 in namespace container-probe-2549 STEP: checking the pod's current state and verifying that restartCount is present Aug 24 23:27:41.536: INFO: Initial restart count of pod liveness-28d2678b-fc88-4862-ac33-fbdd912faff1 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:31:42.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2549" for this suite. 
• [SLOW TEST:245.878 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":303,"completed":27,"skipped":504,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should test the lifecycle of an Endpoint [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:31:42.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:31:43.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4025" for this suite. 
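The Endpoint lifecycle STEPs above are a straight create/update/patch/delete-by-collection exercise against the core Endpoints resource. A minimal sketch of the create and the "deleting the Endpoint by Collection" steps, with placeholder names, namespace, and IP; the kubeconfig path is taken from the log.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	eps := cs.CoreV1().Endpoints("default") // namespace is a placeholder

	ep := &corev1.Endpoints{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "example-endpoint",
			Labels: map[string]string{"test": "lifecycle"},
		},
		Subsets: []corev1.EndpointSubset{{
			Addresses: []corev1.EndpointAddress{{IP: "10.0.0.1"}}, // placeholder IP
			Ports:     []corev1.EndpointPort{{Name: "http", Port: 80, Protocol: corev1.ProtocolTCP}},
		}},
	}
	if _, err := eps.Create(ctx, ep, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// "deleting the Endpoint by Collection": select by label rather than by name
	err = eps.DeleteCollection(ctx, metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "test=lifecycle"})
	fmt.Println("delete err:", err)
}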
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":303,"completed":28,"skipped":511,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:31:43.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Aug 24 23:31:43.556: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:31:59.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5839" for this suite.
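Flipping served to false on one CRD version removes that version's definitions from the published OpenAPI document and from discovery, while the still-served version's schema must be left untouched, which is what the two "check" STEPs above verify. A sketch of the versions stanza using the apiextensions v1 Go types; the version names and the minimal schema here are hypothetical, not the test's generated CRD.

package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	schema := &apiextensionsv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
	}
	versions := []apiextensionsv1.CustomResourceDefinitionVersion{
		// v1 stays served and is the storage version
		{Name: "v1", Served: true, Storage: true, Schema: schema},
		// v2 is the version the test flips: Served=false drops it from
		// discovery and from the published OpenAPI document
		{Name: "v2", Served: false, Storage: false, Schema: schema},
	}
	for _, v := range versions {
		fmt.Printf("%s served=%v\n", v.Name, v.Served)
	}
}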
• [SLOW TEST:16.300 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":303,"completed":29,"skipped":518,"failed":0} SS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:31:59.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Aug 24 23:32:13.774: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-175 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 24 23:32:13.774: INFO: >>> kubeConfig: /root/.kube/config I0824 23:32:13.797721 7 log.go:181] (0xc0037f64d0) (0xc003118b40) Create stream I0824 23:32:13.797751 7 log.go:181] (0xc0037f64d0) (0xc003118b40) Stream added, broadcasting: 1 I0824 23:32:13.799843 7 log.go:181] (0xc0037f64d0) Reply frame received for 1 I0824 23:32:13.799871 7 log.go:181] (0xc0037f64d0) (0xc003118be0) Create stream I0824 23:32:13.799882 7 log.go:181] (0xc0037f64d0) (0xc003118be0) Stream added, broadcasting: 3 I0824 23:32:13.800859 7 log.go:181] (0xc0037f64d0) Reply frame received for 3 I0824 23:32:13.800884 7 log.go:181] (0xc0037f64d0) (0xc0016e3360) Create stream I0824 23:32:13.800894 7 log.go:181] (0xc0037f64d0) (0xc0016e3360) Stream added, broadcasting: 5 I0824 23:32:13.801627 7 log.go:181] (0xc0037f64d0) Reply frame received for 5 I0824 23:32:13.879624 7 log.go:181] (0xc0037f64d0) Data frame received for 5 I0824 23:32:13.879669 7 log.go:181] (0xc0016e3360) (5) Data frame handling I0824 23:32:13.879693 7 log.go:181] (0xc0037f64d0) Data frame received for 3 I0824 23:32:13.879711 7 log.go:181] (0xc003118be0) (3) Data frame handling I0824 23:32:13.879735 7 log.go:181] (0xc003118be0) (3) Data frame sent I0824 23:32:13.880308 7 log.go:181] (0xc0037f64d0) Data frame received for 3 I0824 23:32:13.880324 7 log.go:181] (0xc003118be0) (3) Data frame handling I0824 23:32:13.881705 7 
log.go:181] (0xc0037f64d0) Data frame received for 1 I0824 23:32:13.881746 7 log.go:181] (0xc003118b40) (1) Data frame handling I0824 23:32:13.881775 7 log.go:181] (0xc003118b40) (1) Data frame sent I0824 23:32:13.881800 7 log.go:181] (0xc0037f64d0) (0xc003118b40) Stream removed, broadcasting: 1 I0824 23:32:13.881856 7 log.go:181] (0xc0037f64d0) Go away received I0824 23:32:13.881881 7 log.go:181] (0xc0037f64d0) (0xc003118b40) Stream removed, broadcasting: 1 I0824 23:32:13.881895 7 log.go:181] (0xc0037f64d0) (0xc003118be0) Stream removed, broadcasting: 3 I0824 23:32:13.881906 7 log.go:181] (0xc0037f64d0) (0xc0016e3360) Stream removed, broadcasting: 5 Aug 24 23:32:13.881: INFO: Exec stderr: "" Aug 24 23:32:13.881: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-175 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 24 23:32:13.881: INFO: >>> kubeConfig: /root/.kube/config I0824 23:32:13.915397 7 log.go:181] (0xc006ffc2c0) (0xc001ef99a0) Create stream I0824 23:32:13.915424 7 log.go:181] (0xc006ffc2c0) (0xc001ef99a0) Stream added, broadcasting: 1 I0824 23:32:13.917781 7 log.go:181] (0xc006ffc2c0) Reply frame received for 1 I0824 23:32:13.917814 7 log.go:181] (0xc006ffc2c0) (0xc0016e3400) Create stream I0824 23:32:13.917826 7 log.go:181] (0xc006ffc2c0) (0xc0016e3400) Stream added, broadcasting: 3 I0824 23:32:13.918556 7 log.go:181] (0xc006ffc2c0) Reply frame received for 3 I0824 23:32:13.918587 7 log.go:181] (0xc006ffc2c0) (0xc0016e34a0) Create stream I0824 23:32:13.918596 7 log.go:181] (0xc006ffc2c0) (0xc0016e34a0) Stream added, broadcasting: 5 I0824 23:32:13.919392 7 log.go:181] (0xc006ffc2c0) Reply frame received for 5 I0824 23:32:13.987125 7 log.go:181] (0xc006ffc2c0) Data frame received for 5 I0824 23:32:13.987163 7 log.go:181] (0xc0016e34a0) (5) Data frame handling I0824 23:32:13.987187 7 log.go:181] (0xc006ffc2c0) Data frame received for 3 I0824 23:32:13.987206 7 log.go:181] (0xc0016e3400) (3) Data frame handling I0824 23:32:13.987223 7 log.go:181] (0xc0016e3400) (3) Data frame sent I0824 23:32:13.987250 7 log.go:181] (0xc006ffc2c0) Data frame received for 3 I0824 23:32:13.987293 7 log.go:181] (0xc0016e3400) (3) Data frame handling I0824 23:32:13.988816 7 log.go:181] (0xc006ffc2c0) Data frame received for 1 I0824 23:32:13.988897 7 log.go:181] (0xc001ef99a0) (1) Data frame handling I0824 23:32:13.988921 7 log.go:181] (0xc001ef99a0) (1) Data frame sent I0824 23:32:13.988957 7 log.go:181] (0xc006ffc2c0) (0xc001ef99a0) Stream removed, broadcasting: 1 I0824 23:32:13.989020 7 log.go:181] (0xc006ffc2c0) Go away received I0824 23:32:13.989106 7 log.go:181] (0xc006ffc2c0) (0xc001ef99a0) Stream removed, broadcasting: 1 I0824 23:32:13.989132 7 log.go:181] (0xc006ffc2c0) (0xc0016e3400) Stream removed, broadcasting: 3 I0824 23:32:13.989151 7 log.go:181] (0xc006ffc2c0) (0xc0016e34a0) Stream removed, broadcasting: 5 Aug 24 23:32:13.989: INFO: Exec stderr: "" Aug 24 23:32:13.989: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-175 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 24 23:32:13.989: INFO: >>> kubeConfig: /root/.kube/config I0824 23:32:14.015096 7 log.go:181] (0xc0037f6dc0) (0xc003119040) Create stream I0824 23:32:14.015123 7 log.go:181] (0xc0037f6dc0) (0xc003119040) Stream added, broadcasting: 1 I0824 23:32:14.017145 7 log.go:181] (0xc0037f6dc0) Reply frame received for 1 I0824 
23:32:14.017199 7 log.go:181] (0xc0037f6dc0) (0xc0031190e0) Create stream I0824 23:32:14.017220 7 log.go:181] (0xc0037f6dc0) (0xc0031190e0) Stream added, broadcasting: 3 I0824 23:32:14.018113 7 log.go:181] (0xc0037f6dc0) Reply frame received for 3 I0824 23:32:14.018149 7 log.go:181] (0xc0037f6dc0) (0xc0016e3540) Create stream I0824 23:32:14.018159 7 log.go:181] (0xc0037f6dc0) (0xc0016e3540) Stream added, broadcasting: 5 I0824 23:32:14.019042 7 log.go:181] (0xc0037f6dc0) Reply frame received for 5 I0824 23:32:14.097434 7 log.go:181] (0xc0037f6dc0) Data frame received for 5 I0824 23:32:14.097469 7 log.go:181] (0xc0016e3540) (5) Data frame handling I0824 23:32:14.097506 7 log.go:181] (0xc0037f6dc0) Data frame received for 3 I0824 23:32:14.097513 7 log.go:181] (0xc0031190e0) (3) Data frame handling I0824 23:32:14.097522 7 log.go:181] (0xc0031190e0) (3) Data frame sent I0824 23:32:14.097664 7 log.go:181] (0xc0037f6dc0) Data frame received for 3 I0824 23:32:14.097673 7 log.go:181] (0xc0031190e0) (3) Data frame handling I0824 23:32:14.100369 7 log.go:181] (0xc0037f6dc0) Data frame received for 1 I0824 23:32:14.100409 7 log.go:181] (0xc003119040) (1) Data frame handling I0824 23:32:14.100426 7 log.go:181] (0xc003119040) (1) Data frame sent I0824 23:32:14.100438 7 log.go:181] (0xc0037f6dc0) (0xc003119040) Stream removed, broadcasting: 1 I0824 23:32:14.100462 7 log.go:181] (0xc0037f6dc0) Go away received I0824 23:32:14.100520 7 log.go:181] (0xc0037f6dc0) (0xc003119040) Stream removed, broadcasting: 1 I0824 23:32:14.100534 7 log.go:181] (0xc0037f6dc0) (0xc0031190e0) Stream removed, broadcasting: 3 I0824 23:32:14.100541 7 log.go:181] (0xc0037f6dc0) (0xc0016e3540) Stream removed, broadcasting: 5 Aug 24 23:32:14.100: INFO: Exec stderr: "" Aug 24 23:32:14.100: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-175 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 24 23:32:14.100: INFO: >>> kubeConfig: /root/.kube/config I0824 23:32:14.188541 7 log.go:181] (0xc006ffc8f0) (0xc001ef9cc0) Create stream I0824 23:32:14.188567 7 log.go:181] (0xc006ffc8f0) (0xc001ef9cc0) Stream added, broadcasting: 1 I0824 23:32:14.190888 7 log.go:181] (0xc006ffc8f0) Reply frame received for 1 I0824 23:32:14.190937 7 log.go:181] (0xc006ffc8f0) (0xc003119180) Create stream I0824 23:32:14.190953 7 log.go:181] (0xc006ffc8f0) (0xc003119180) Stream added, broadcasting: 3 I0824 23:32:14.191824 7 log.go:181] (0xc006ffc8f0) Reply frame received for 3 I0824 23:32:14.191888 7 log.go:181] (0xc006ffc8f0) (0xc003119220) Create stream I0824 23:32:14.191909 7 log.go:181] (0xc006ffc8f0) (0xc003119220) Stream added, broadcasting: 5 I0824 23:32:14.192672 7 log.go:181] (0xc006ffc8f0) Reply frame received for 5 I0824 23:32:14.256491 7 log.go:181] (0xc006ffc8f0) Data frame received for 5 I0824 23:32:14.256547 7 log.go:181] (0xc003119220) (5) Data frame handling I0824 23:32:14.256579 7 log.go:181] (0xc006ffc8f0) Data frame received for 3 I0824 23:32:14.256587 7 log.go:181] (0xc003119180) (3) Data frame handling I0824 23:32:14.256596 7 log.go:181] (0xc003119180) (3) Data frame sent I0824 23:32:14.256603 7 log.go:181] (0xc006ffc8f0) Data frame received for 3 I0824 23:32:14.256611 7 log.go:181] (0xc003119180) (3) Data frame handling I0824 23:32:14.258181 7 log.go:181] (0xc006ffc8f0) Data frame received for 1 I0824 23:32:14.258211 7 log.go:181] (0xc001ef9cc0) (1) Data frame handling I0824 23:32:14.258224 7 log.go:181] (0xc001ef9cc0) (1) Data frame sent 
I0824 23:32:14.258236 7 log.go:181] (0xc006ffc8f0) (0xc001ef9cc0) Stream removed, broadcasting: 1 I0824 23:32:14.258266 7 log.go:181] (0xc006ffc8f0) Go away received I0824 23:32:14.258418 7 log.go:181] (0xc006ffc8f0) (0xc001ef9cc0) Stream removed, broadcasting: 1 I0824 23:32:14.258446 7 log.go:181] (0xc006ffc8f0) (0xc003119180) Stream removed, broadcasting: 3 I0824 23:32:14.258468 7 log.go:181] (0xc006ffc8f0) (0xc003119220) Stream removed, broadcasting: 5 Aug 24 23:32:14.258: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Aug 24 23:32:14.258: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-175 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 24 23:32:14.258: INFO: >>> kubeConfig: /root/.kube/config I0824 23:32:14.283661 7 log.go:181] (0xc0037f7130) (0xc003119400) Create stream I0824 23:32:14.283690 7 log.go:181] (0xc0037f7130) (0xc003119400) Stream added, broadcasting: 1 I0824 23:32:14.288327 7 log.go:181] (0xc0037f7130) Reply frame received for 1 I0824 23:32:14.288353 7 log.go:181] (0xc0037f7130) (0xc001ef9e00) Create stream I0824 23:32:14.288359 7 log.go:181] (0xc0037f7130) (0xc001ef9e00) Stream added, broadcasting: 3 I0824 23:32:14.289429 7 log.go:181] (0xc0037f7130) Reply frame received for 3 I0824 23:32:14.289457 7 log.go:181] (0xc0037f7130) (0xc00148a960) Create stream I0824 23:32:14.289470 7 log.go:181] (0xc0037f7130) (0xc00148a960) Stream added, broadcasting: 5 I0824 23:32:14.291601 7 log.go:181] (0xc0037f7130) Reply frame received for 5 I0824 23:32:14.346081 7 log.go:181] (0xc0037f7130) Data frame received for 5 I0824 23:32:14.346136 7 log.go:181] (0xc00148a960) (5) Data frame handling I0824 23:32:14.346158 7 log.go:181] (0xc0037f7130) Data frame received for 3 I0824 23:32:14.346182 7 log.go:181] (0xc001ef9e00) (3) Data frame handling I0824 23:32:14.346208 7 log.go:181] (0xc001ef9e00) (3) Data frame sent I0824 23:32:14.346224 7 log.go:181] (0xc0037f7130) Data frame received for 3 I0824 23:32:14.346234 7 log.go:181] (0xc001ef9e00) (3) Data frame handling I0824 23:32:14.347547 7 log.go:181] (0xc0037f7130) Data frame received for 1 I0824 23:32:14.347568 7 log.go:181] (0xc003119400) (1) Data frame handling I0824 23:32:14.347578 7 log.go:181] (0xc003119400) (1) Data frame sent I0824 23:32:14.347591 7 log.go:181] (0xc0037f7130) (0xc003119400) Stream removed, broadcasting: 1 I0824 23:32:14.347624 7 log.go:181] (0xc0037f7130) Go away received I0824 23:32:14.347674 7 log.go:181] (0xc0037f7130) (0xc003119400) Stream removed, broadcasting: 1 I0824 23:32:14.347689 7 log.go:181] (0xc0037f7130) (0xc001ef9e00) Stream removed, broadcasting: 3 I0824 23:32:14.347697 7 log.go:181] (0xc0037f7130) (0xc00148a960) Stream removed, broadcasting: 5 Aug 24 23:32:14.347: INFO: Exec stderr: "" Aug 24 23:32:14.347: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-175 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 24 23:32:14.347: INFO: >>> kubeConfig: /root/.kube/config I0824 23:32:14.408170 7 log.go:181] (0xc0039b4370) (0xc00148ad20) Create stream I0824 23:32:14.408205 7 log.go:181] (0xc0039b4370) (0xc00148ad20) Stream added, broadcasting: 1 I0824 23:32:14.410166 7 log.go:181] (0xc0039b4370) Reply frame received for 1 I0824 23:32:14.410197 7 log.go:181] (0xc0039b4370) (0xc00148adc0) Create stream I0824 23:32:14.410208 7 
log.go:181] (0xc0039b4370) (0xc00148adc0) Stream added, broadcasting: 3 I0824 23:32:14.411033 7 log.go:181] (0xc0039b4370) Reply frame received for 3 I0824 23:32:14.411062 7 log.go:181] (0xc0039b4370) (0xc0016e35e0) Create stream I0824 23:32:14.411073 7 log.go:181] (0xc0039b4370) (0xc0016e35e0) Stream added, broadcasting: 5 I0824 23:32:14.411846 7 log.go:181] (0xc0039b4370) Reply frame received for 5 I0824 23:32:14.468967 7 log.go:181] (0xc0039b4370) Data frame received for 5 I0824 23:32:14.469005 7 log.go:181] (0xc0016e35e0) (5) Data frame handling I0824 23:32:14.469027 7 log.go:181] (0xc0039b4370) Data frame received for 3 I0824 23:32:14.469039 7 log.go:181] (0xc00148adc0) (3) Data frame handling I0824 23:32:14.469053 7 log.go:181] (0xc00148adc0) (3) Data frame sent I0824 23:32:14.469067 7 log.go:181] (0xc0039b4370) Data frame received for 3 I0824 23:32:14.469092 7 log.go:181] (0xc00148adc0) (3) Data frame handling I0824 23:32:14.469980 7 log.go:181] (0xc0039b4370) Data frame received for 1 I0824 23:32:14.470009 7 log.go:181] (0xc00148ad20) (1) Data frame handling I0824 23:32:14.470030 7 log.go:181] (0xc00148ad20) (1) Data frame sent I0824 23:32:14.470045 7 log.go:181] (0xc0039b4370) (0xc00148ad20) Stream removed, broadcasting: 1 I0824 23:32:14.470062 7 log.go:181] (0xc0039b4370) Go away received I0824 23:32:14.470132 7 log.go:181] (0xc0039b4370) (0xc00148ad20) Stream removed, broadcasting: 1 I0824 23:32:14.470149 7 log.go:181] (0xc0039b4370) (0xc00148adc0) Stream removed, broadcasting: 3 I0824 23:32:14.470161 7 log.go:181] (0xc0039b4370) (0xc0016e35e0) Stream removed, broadcasting: 5 Aug 24 23:32:14.470: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Aug 24 23:32:14.470: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-175 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 24 23:32:14.470: INFO: >>> kubeConfig: /root/.kube/config I0824 23:32:14.505811 7 log.go:181] (0xc0037f7760) (0xc003119680) Create stream I0824 23:32:14.505838 7 log.go:181] (0xc0037f7760) (0xc003119680) Stream added, broadcasting: 1 I0824 23:32:14.511412 7 log.go:181] (0xc0037f7760) Reply frame received for 1 I0824 23:32:14.511459 7 log.go:181] (0xc0037f7760) (0xc00148ae60) Create stream I0824 23:32:14.511468 7 log.go:181] (0xc0037f7760) (0xc00148ae60) Stream added, broadcasting: 3 I0824 23:32:14.512386 7 log.go:181] (0xc0037f7760) Reply frame received for 3 I0824 23:32:14.512428 7 log.go:181] (0xc0037f7760) (0xc000a30a00) Create stream I0824 23:32:14.512451 7 log.go:181] (0xc0037f7760) (0xc000a30a00) Stream added, broadcasting: 5 I0824 23:32:14.513792 7 log.go:181] (0xc0037f7760) Reply frame received for 5 I0824 23:32:14.579199 7 log.go:181] (0xc0037f7760) Data frame received for 5 I0824 23:32:14.579237 7 log.go:181] (0xc000a30a00) (5) Data frame handling I0824 23:32:14.579264 7 log.go:181] (0xc0037f7760) Data frame received for 3 I0824 23:32:14.579276 7 log.go:181] (0xc00148ae60) (3) Data frame handling I0824 23:32:14.579288 7 log.go:181] (0xc00148ae60) (3) Data frame sent I0824 23:32:14.579303 7 log.go:181] (0xc0037f7760) Data frame received for 3 I0824 23:32:14.579323 7 log.go:181] (0xc00148ae60) (3) Data frame handling I0824 23:32:14.580704 7 log.go:181] (0xc0037f7760) Data frame received for 1 I0824 23:32:14.580818 7 log.go:181] (0xc003119680) (1) Data frame handling I0824 23:32:14.580834 7 log.go:181] (0xc003119680) (1) Data 
frame sent I0824 23:32:14.580845 7 log.go:181] (0xc0037f7760) (0xc003119680) Stream removed, broadcasting: 1 I0824 23:32:14.580914 7 log.go:181] (0xc0037f7760) (0xc003119680) Stream removed, broadcasting: 1 I0824 23:32:14.580924 7 log.go:181] (0xc0037f7760) (0xc00148ae60) Stream removed, broadcasting: 3 I0824 23:32:14.580931 7 log.go:181] (0xc0037f7760) (0xc000a30a00) Stream removed, broadcasting: 5 Aug 24 23:32:14.580: INFO: Exec stderr: "" Aug 24 23:32:14.580: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-175 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 24 23:32:14.580: INFO: >>> kubeConfig: /root/.kube/config I0824 23:32:14.581069 7 log.go:181] (0xc0037f7760) Go away received I0824 23:32:14.649349 7 log.go:181] (0xc0039b49a0) (0xc00148b0e0) Create stream I0824 23:32:14.649383 7 log.go:181] (0xc0039b49a0) (0xc00148b0e0) Stream added, broadcasting: 1 I0824 23:32:14.651513 7 log.go:181] (0xc0039b49a0) Reply frame received for 1 I0824 23:32:14.651540 7 log.go:181] (0xc0039b49a0) (0xc003119720) Create stream I0824 23:32:14.651547 7 log.go:181] (0xc0039b49a0) (0xc003119720) Stream added, broadcasting: 3 I0824 23:32:14.652354 7 log.go:181] (0xc0039b49a0) Reply frame received for 3 I0824 23:32:14.652380 7 log.go:181] (0xc0039b49a0) (0xc000a30fa0) Create stream I0824 23:32:14.652390 7 log.go:181] (0xc0039b49a0) (0xc000a30fa0) Stream added, broadcasting: 5 I0824 23:32:14.653189 7 log.go:181] (0xc0039b49a0) Reply frame received for 5 I0824 23:32:14.729378 7 log.go:181] (0xc0039b49a0) Data frame received for 5 I0824 23:32:14.729430 7 log.go:181] (0xc000a30fa0) (5) Data frame handling I0824 23:32:14.729463 7 log.go:181] (0xc0039b49a0) Data frame received for 3 I0824 23:32:14.729498 7 log.go:181] (0xc003119720) (3) Data frame handling I0824 23:32:14.729533 7 log.go:181] (0xc003119720) (3) Data frame sent I0824 23:32:14.729549 7 log.go:181] (0xc0039b49a0) Data frame received for 3 I0824 23:32:14.729561 7 log.go:181] (0xc003119720) (3) Data frame handling I0824 23:32:14.731320 7 log.go:181] (0xc0039b49a0) Data frame received for 1 I0824 23:32:14.731364 7 log.go:181] (0xc00148b0e0) (1) Data frame handling I0824 23:32:14.731385 7 log.go:181] (0xc00148b0e0) (1) Data frame sent I0824 23:32:14.731408 7 log.go:181] (0xc0039b49a0) (0xc00148b0e0) Stream removed, broadcasting: 1 I0824 23:32:14.731531 7 log.go:181] (0xc0039b49a0) (0xc00148b0e0) Stream removed, broadcasting: 1 I0824 23:32:14.731579 7 log.go:181] (0xc0039b49a0) (0xc003119720) Stream removed, broadcasting: 3 I0824 23:32:14.731611 7 log.go:181] (0xc0039b49a0) (0xc000a30fa0) Stream removed, broadcasting: 5 Aug 24 23:32:14.731: INFO: Exec stderr: "" Aug 24 23:32:14.731: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-175 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 24 23:32:14.731: INFO: >>> kubeConfig: /root/.kube/config I0824 23:32:14.731718 7 log.go:181] (0xc0039b49a0) Go away received I0824 23:32:14.758882 7 log.go:181] (0xc007c6a370) (0xc0033abae0) Create stream I0824 23:32:14.758910 7 log.go:181] (0xc007c6a370) (0xc0033abae0) Stream added, broadcasting: 1 I0824 23:32:14.766954 7 log.go:181] (0xc007c6a370) Reply frame received for 1 I0824 23:32:14.767004 7 log.go:181] (0xc007c6a370) (0xc001ef9ea0) Create stream I0824 23:32:14.767040 7 log.go:181] (0xc007c6a370) (0xc001ef9ea0) Stream added, broadcasting: 3 
I0824 23:32:14.767832 7 log.go:181] (0xc007c6a370) Reply frame received for 3 I0824 23:32:14.767866 7 log.go:181] (0xc007c6a370) (0xc00148b180) Create stream I0824 23:32:14.767875 7 log.go:181] (0xc007c6a370) (0xc00148b180) Stream added, broadcasting: 5 I0824 23:32:14.768505 7 log.go:181] (0xc007c6a370) Reply frame received for 5 I0824 23:32:14.826845 7 log.go:181] (0xc007c6a370) Data frame received for 5 I0824 23:32:14.826887 7 log.go:181] (0xc00148b180) (5) Data frame handling I0824 23:32:14.826945 7 log.go:181] (0xc007c6a370) Data frame received for 3 I0824 23:32:14.826986 7 log.go:181] (0xc001ef9ea0) (3) Data frame handling I0824 23:32:14.827024 7 log.go:181] (0xc001ef9ea0) (3) Data frame sent I0824 23:32:14.827048 7 log.go:181] (0xc007c6a370) Data frame received for 3 I0824 23:32:14.827070 7 log.go:181] (0xc001ef9ea0) (3) Data frame handling I0824 23:32:14.828471 7 log.go:181] (0xc007c6a370) Data frame received for 1 I0824 23:32:14.828509 7 log.go:181] (0xc0033abae0) (1) Data frame handling I0824 23:32:14.828547 7 log.go:181] (0xc0033abae0) (1) Data frame sent I0824 23:32:14.828571 7 log.go:181] (0xc007c6a370) (0xc0033abae0) Stream removed, broadcasting: 1 I0824 23:32:14.828596 7 log.go:181] (0xc007c6a370) Go away received I0824 23:32:14.828831 7 log.go:181] (0xc007c6a370) (0xc0033abae0) Stream removed, broadcasting: 1 I0824 23:32:14.828856 7 log.go:181] (0xc007c6a370) (0xc001ef9ea0) Stream removed, broadcasting: 3 I0824 23:32:14.828874 7 log.go:181] (0xc007c6a370) (0xc00148b180) Stream removed, broadcasting: 5 Aug 24 23:32:14.828: INFO: Exec stderr: "" Aug 24 23:32:14.828: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-175 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 24 23:32:14.828: INFO: >>> kubeConfig: /root/.kube/config I0824 23:32:14.855586 7 log.go:181] (0xc0037f7d90) (0xc0031199a0) Create stream I0824 23:32:14.855626 7 log.go:181] (0xc0037f7d90) (0xc0031199a0) Stream added, broadcasting: 1 I0824 23:32:14.858090 7 log.go:181] (0xc0037f7d90) Reply frame received for 1 I0824 23:32:14.858143 7 log.go:181] (0xc0037f7d90) (0xc003119ae0) Create stream I0824 23:32:14.858168 7 log.go:181] (0xc0037f7d90) (0xc003119ae0) Stream added, broadcasting: 3 I0824 23:32:14.859047 7 log.go:181] (0xc0037f7d90) Reply frame received for 3 I0824 23:32:14.859089 7 log.go:181] (0xc0037f7d90) (0xc003119c20) Create stream I0824 23:32:14.859104 7 log.go:181] (0xc0037f7d90) (0xc003119c20) Stream added, broadcasting: 5 I0824 23:32:14.859914 7 log.go:181] (0xc0037f7d90) Reply frame received for 5 I0824 23:32:14.935394 7 log.go:181] (0xc0037f7d90) Data frame received for 5 I0824 23:32:14.935445 7 log.go:181] (0xc003119c20) (5) Data frame handling I0824 23:32:14.935487 7 log.go:181] (0xc0037f7d90) Data frame received for 3 I0824 23:32:14.935501 7 log.go:181] (0xc003119ae0) (3) Data frame handling I0824 23:32:14.935523 7 log.go:181] (0xc003119ae0) (3) Data frame sent I0824 23:32:14.935536 7 log.go:181] (0xc0037f7d90) Data frame received for 3 I0824 23:32:14.935547 7 log.go:181] (0xc003119ae0) (3) Data frame handling I0824 23:32:14.937173 7 log.go:181] (0xc0037f7d90) Data frame received for 1 I0824 23:32:14.937275 7 log.go:181] (0xc0031199a0) (1) Data frame handling I0824 23:32:14.937307 7 log.go:181] (0xc0031199a0) (1) Data frame sent I0824 23:32:14.937320 7 log.go:181] (0xc0037f7d90) (0xc0031199a0) Stream removed, broadcasting: 1 I0824 23:32:14.937473 7 log.go:181] (0xc0037f7d90) 
(0xc0031199a0) Stream removed, broadcasting: 1
I0824 23:32:14.937508 7 log.go:181] (0xc0037f7d90) (0xc003119ae0) Stream removed, broadcasting: 3
I0824 23:32:14.937558 7 log.go:181] (0xc0037f7d90) Go away received
I0824 23:32:14.937673 7 log.go:181] (0xc0037f7d90) (0xc003119c20) Stream removed, broadcasting: 5
Aug 24 23:32:14.937: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 24 23:32:14.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-175" for this suite.
• [SLOW TEST:15.341 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":30,"skipped":520,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 24 23:32:14.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-secret-w7m6
STEP: Creating a pod to test atomic-volume-subpath
Aug 24 23:32:15.037: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-w7m6" in namespace "subpath-5466" to be "Succeeded or Failed"
Aug 24 23:32:15.060: INFO: Pod "pod-subpath-test-secret-w7m6": Phase="Pending", Reason="", readiness=false. Elapsed: 22.13308ms
Aug 24 23:32:17.064: INFO: Pod "pod-subpath-test-secret-w7m6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026517014s
Aug 24 23:32:19.068: INFO: Pod "pod-subpath-test-secret-w7m6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031083132s
Aug 24 23:32:21.072: INFO: Pod "pod-subpath-test-secret-w7m6": Phase="Running", Reason="", readiness=true. Elapsed: 6.034741448s
Aug 24 23:32:23.077: INFO: Pod "pod-subpath-test-secret-w7m6": Phase="Running", Reason="", readiness=true. Elapsed: 8.03931397s
Aug 24 23:32:25.081: INFO: Pod "pod-subpath-test-secret-w7m6": Phase="Running", Reason="", readiness=true. Elapsed: 10.04349692s
Aug 24 23:32:27.084: INFO: Pod "pod-subpath-test-secret-w7m6": Phase="Running", Reason="", readiness=true. Elapsed: 12.046952939s
Aug 24 23:32:29.089: INFO: Pod "pod-subpath-test-secret-w7m6": Phase="Running", Reason="", readiness=true. Elapsed: 14.051320012s
Aug 24 23:32:31.093: INFO: Pod "pod-subpath-test-secret-w7m6": Phase="Running", Reason="", readiness=true. Elapsed: 16.055365377s
Aug 24 23:32:33.097: INFO: Pod "pod-subpath-test-secret-w7m6": Phase="Running", Reason="", readiness=true. Elapsed: 18.059447239s
Aug 24 23:32:35.140: INFO: Pod "pod-subpath-test-secret-w7m6": Phase="Running", Reason="", readiness=true. Elapsed: 20.102245631s
Aug 24 23:32:37.144: INFO: Pod "pod-subpath-test-secret-w7m6": Phase="Running", Reason="", readiness=true. Elapsed: 22.106726001s
Aug 24 23:32:39.148: INFO: Pod "pod-subpath-test-secret-w7m6": Phase="Running", Reason="", readiness=true. Elapsed: 24.110463715s
Aug 24 23:32:41.158: INFO: Pod "pod-subpath-test-secret-w7m6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.120548737s
STEP: Saw pod success
Aug 24 23:32:41.158: INFO: Pod "pod-subpath-test-secret-w7m6" satisfied condition "Succeeded or Failed"
Aug 24 23:32:41.160: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-w7m6 container test-container-subpath-secret-w7m6:
STEP: delete the pod
Aug 24 23:32:41.204: INFO: Waiting for pod pod-subpath-test-secret-w7m6 to disappear
Aug 24 23:32:41.225: INFO: Pod pod-subpath-test-secret-w7m6 no longer exists
STEP: Deleting pod pod-subpath-test-secret-w7m6
Aug 24 23:32:41.226: INFO: Deleting pod "pod-subpath-test-secret-w7m6" in namespace "subpath-5466"
[AfterEach] [sig-storage] Subpath
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 24 23:32:41.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5466" for this suite.
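The dense runs of log.go:181 entries in the KubeletManagedEtcHosts output above are the SPDY plumbing behind each ExecWithOptions call: the client POSTs to the pod's exec subresource and the connection multiplexes an error channel plus stdout and stderr (the streams added as 1, 3, and 5), so every `cat /etc/hosts` shows up as a burst of Create stream / Data frame / Stream removed lines. A minimal client-go sketch of the same operation, assuming a reachable cluster; the function name and the hard-coded pod coordinates are illustrative, not the suite's code:

package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

// execInPod is roughly what the framework's ExecWithOptions does: POST to
// the pod's exec subresource, then stream stdout/stderr back over SPDY.
func execInPod(cfg *rest.Config, cs *kubernetes.Clientset, ns, pod, container string, cmd []string) (string, string, error) {
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace(ns).Name(pod).SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: container,
			Command:   cmd,
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		return "", "", err
	}
	var stdout, stderr bytes.Buffer
	// Stream blocks until the remote command exits; the Create stream /
	// Data frame / Stream removed log entries correspond to this call.
	err = exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr})
	return stdout.String(), stderr.String(), err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	out, errOut, err := execInPod(cfg, cs, "e2e-kubelet-etc-hosts-175", "test-pod", "busybox-1", []string{"cat", "/etc/hosts"})
	fmt.Printf("stdout: %q stderr: %q err: %v\n", out, errOut, err)
}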
• [SLOW TEST:26.287 seconds]
[sig-storage] Subpath
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with secret pod [LinuxOnly] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":303,"completed":31,"skipped":549,"failed":0}
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 24 23:32:41.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
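Before the polling below starts, each node is classified: nodes whose NoSchedule taints the DaemonSet's pods do not tolerate (the master-tainted latest-control-plane here) are skipped, and the spec then waits until every remaining node reports a ready daemon pod. A rough standalone sketch of that readiness check, assuming a clientset built as in the exec example above; the label selector is an assumption, not the suite's exact selector:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// daemonPodsReadyOnAllNodes is a simplified version of the check behind the
// "Number of nodes with available pods" lines below.
func daemonPodsReadyOnAllNodes(ctx context.Context, cs *kubernetes.Clientset, ns string) (bool, error) {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	schedulable := map[string]bool{}
	for _, n := range nodes.Items {
		ok := true
		for _, t := range n.Spec.Taints {
			// The real check asks whether the DaemonSet's pods tolerate each
			// taint; skipping every NoSchedule taint is a simplification.
			if t.Effect == corev1.TaintEffectNoSchedule {
				ok = false
			}
		}
		if ok {
			schedulable[n.Name] = true
		}
	}
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: "daemonset-name=daemon-set"}) // assumed label
	if err != nil {
		return false, err
	}
	withReadyPod := map[string]bool{}
	for _, p := range pods.Items {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				withReadyPod[p.Spec.NodeName] = true
			}
		}
	}
	return len(withReadyPod) == len(schedulable), nil
}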
Aug 24 23:32:41.371: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 23:32:41.415: INFO: Number of nodes with available pods: 0
Aug 24 23:32:41.415: INFO: Node latest-worker is running more than one daemon pod
Aug 24 23:32:42.420: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 23:32:42.424: INFO: Number of nodes with available pods: 0
Aug 24 23:32:42.424: INFO: Node latest-worker is running more than one daemon pod
Aug 24 23:32:43.602: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 23:32:43.605: INFO: Number of nodes with available pods: 0
Aug 24 23:32:43.605: INFO: Node latest-worker is running more than one daemon pod
Aug 24 23:32:44.420: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 23:32:44.424: INFO: Number of nodes with available pods: 0
Aug 24 23:32:44.424: INFO: Node latest-worker is running more than one daemon pod
Aug 24 23:32:45.419: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 23:32:45.423: INFO: Number of nodes with available pods: 1
Aug 24 23:32:45.423: INFO: Node latest-worker is running more than one daemon pod
Aug 24 23:32:46.419: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 23:32:46.423: INFO: Number of nodes with available pods: 2
Aug 24 23:32:46.423: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
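The step below deletes one daemon pod and waits for the DaemonSet controller to schedule a replacement, which is why the available-node count drops back to 1 before recovering to 2. Sketched with the generic polling helper, reusing daemonPodsReadyOnAllNodes from the previous sketch (the victim pod name and the timeouts are illustrative):

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// stopAndWaitForRevival deletes one daemon pod, then polls until every
// schedulable node again reports a ready pod, as in the log below.
func stopAndWaitForRevival(ctx context.Context, cs *kubernetes.Clientset, ns, victim string) error {
	if err := cs.CoreV1().Pods(ns).Delete(ctx, victim, metav1.DeleteOptions{}); err != nil {
		return err
	}
	return wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		return daemonPodsReadyOnAllNodes(ctx, cs, ns)
	})
}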
Aug 24 23:32:46.520: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 23:32:46.542: INFO: Number of nodes with available pods: 1
Aug 24 23:32:46.542: INFO: Node latest-worker2 is running more than one daemon pod
Aug 24 23:32:47.547: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 23:32:47.550: INFO: Number of nodes with available pods: 1
Aug 24 23:32:47.550: INFO: Node latest-worker2 is running more than one daemon pod
Aug 24 23:32:48.547: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 23:32:48.551: INFO: Number of nodes with available pods: 1
Aug 24 23:32:48.551: INFO: Node latest-worker2 is running more than one daemon pod
Aug 24 23:32:49.547: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 23:32:49.550: INFO: Number of nodes with available pods: 1
Aug 24 23:32:49.550: INFO: Node latest-worker2 is running more than one daemon pod
Aug 24 23:32:50.686: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 23:32:50.731: INFO: Number of nodes with available pods: 1
Aug 24 23:32:50.731: INFO: Node latest-worker2 is running more than one daemon pod
Aug 24 23:32:51.548: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 23:32:51.551: INFO: Number of nodes with available pods: 1
Aug 24 23:32:51.551: INFO: Node latest-worker2 is running more than one daemon pod
Aug 24 23:32:52.725: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 23:32:52.956: INFO: Number of nodes with available pods: 1
Aug 24 23:32:52.956: INFO: Node latest-worker2 is running more than one daemon pod
Aug 24 23:32:53.680: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 23:32:53.686: INFO: Number of nodes with available pods: 1
Aug 24 23:32:53.686: INFO: Node latest-worker2 is running more than one daemon pod
Aug 24 23:32:54.608: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 23:32:54.646: INFO: Number of nodes with available pods: 1
Aug 24 23:32:54.646: INFO: Node latest-worker2 is running more than one daemon pod
Aug 24 23:32:55.572: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 23:32:55.574: INFO: Number of nodes with available pods: 1
Aug 24 23:32:55.575: INFO: Node latest-worker2 is running more than one daemon pod
Aug 24 23:32:56.546: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 23:32:56.548: INFO: Number of nodes with available pods: 2
Aug 24 23:32:56.548: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3383, will wait for the garbage collector to delete the pods
Aug 24 23:32:56.609: INFO: Deleting DaemonSet.extensions daemon-set took: 6.51322ms
Aug 24 23:32:57.009: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.233949ms
Aug 24 23:33:10.126: INFO: Number of nodes with available pods: 0
Aug 24 23:33:10.126: INFO: Number of running nodes: 0, number of available pods: 0
Aug 24 23:33:10.132: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3383/daemonsets","resourceVersion":"3412626"},"items":null}
Aug 24 23:33:10.142: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3383/pods","resourceVersion":"3412627"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 24 23:33:10.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3383" for this suite.
• [SLOW TEST:28.927 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop simple daemon [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":303,"completed":32,"skipped":555,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 24 23:33:10.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
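The pod created here runs as a non-root user, mounts an emptyDir with medium Memory (a tmpfs), writes a file with 0644 permissions, and exits, so the suite can wait for the "Succeeded or Failed" condition below. A hand-written approximation of such a pod; the image, command, and UID are assumptions (the suite uses its own mount-test image), not the spec this log actually created:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirTmpfsPod sketches a non-root pod exercising a 0644 file on a
// memory-backed emptyDir, in the spirit of the spec above.
func emptyDirTmpfsPod() *corev1.Pod {
	uid := int64(1000) // any non-root UID
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-emptydir-tmpfs-"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever, // pod should end Succeeded
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" is what makes this emptyDir a tmpfs mount.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // illustrative only
				// umask 0022 yields a 0644 file, matching the test's name.
				Command:      []string{"sh", "-c", "umask 0022 && echo hi > /mnt/test/f && ls -l /mnt/test/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/test"}},
			}},
		},
	}
}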
Aug 24 23:33:10.247: INFO: Waiting up to 5m0s for pod "pod-958e58f8-0288-4f56-8eea-091e2e084915" in namespace "emptydir-5202" to be "Succeeded or Failed"
Aug 24 23:33:10.250: INFO: Pod "pod-958e58f8-0288-4f56-8eea-091e2e084915": Phase="Pending", Reason="", readiness=false. Elapsed: 3.203691ms
Aug 24 23:33:12.254: INFO: Pod "pod-958e58f8-0288-4f56-8eea-091e2e084915": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007202349s
Aug 24 23:33:14.258: INFO: Pod "pod-958e58f8-0288-4f56-8eea-091e2e084915": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011175123s
Aug 24 23:33:16.278: INFO: Pod "pod-958e58f8-0288-4f56-8eea-091e2e084915": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030993286s
STEP: Saw pod success
Aug 24 23:33:16.278: INFO: Pod "pod-958e58f8-0288-4f56-8eea-091e2e084915" satisfied condition "Succeeded or Failed"
Aug 24 23:33:16.281: INFO: Trying to get logs from node latest-worker pod pod-958e58f8-0288-4f56-8eea-091e2e084915 container test-container:
STEP: delete the pod
Aug 24 23:33:16.312: INFO: Waiting for pod pod-958e58f8-0288-4f56-8eea-091e2e084915 to disappear
Aug 24 23:33:16.322: INFO: Pod pod-958e58f8-0288-4f56-8eea-091e2e084915 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 24 23:33:16.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5202" for this suite.
• [SLOW TEST:6.168 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":33,"skipped":557,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 24 23:33:16.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 24 23:33:16.406: INFO: Waiting up to 5m0s for pod "pod-1e000263-0dbb-4a83-afbb-13ad2f34fe82" in namespace "emptydir-4932" to be "Succeeded or Failed"
Aug 24 23:33:16.409: INFO: Pod "pod-1e000263-0dbb-4a83-afbb-13ad2f34fe82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.353894ms
Aug 24 23:33:18.542: INFO: Pod "pod-1e000263-0dbb-4a83-afbb-13ad2f34fe82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135338713s
Aug 24 23:33:20.545: INFO: Pod "pod-1e000263-0dbb-4a83-afbb-13ad2f34fe82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.138640995s
STEP: Saw pod success
Aug 24 23:33:20.545: INFO: Pod "pod-1e000263-0dbb-4a83-afbb-13ad2f34fe82" satisfied condition "Succeeded or Failed"
Aug 24 23:33:20.548: INFO: Trying to get logs from node latest-worker pod pod-1e000263-0dbb-4a83-afbb-13ad2f34fe82 container test-container:
STEP: delete the pod
Aug 24 23:33:20.686: INFO: Waiting for pod pod-1e000263-0dbb-4a83-afbb-13ad2f34fe82 to disappear
Aug 24 23:33:20.719: INFO: Pod pod-1e000263-0dbb-4a83-afbb-13ad2f34fe82 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 24 23:33:20.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4932" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":34,"skipped":559,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Service endpoints latency
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 24 23:33:20.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Aug 24 23:33:20.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-2304
I0824 23:33:20.824073 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-2304, replica count: 1
I0824 23:33:21.874482 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0824 23:33:22.874691 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0824 23:33:23.874907 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0824 23:33:24.875157 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Aug 24 23:33:25.036: INFO: Created: latency-svc-s4w78
Aug 24 23:33:25.123: INFO: Got endpoints: latency-svc-s4w78 [148.447851ms]
Aug 24 23:33:25.365: INFO: Created: latency-svc-ghl8c
Aug 24 23:33:25.414: INFO: Got endpoints: latency-svc-ghl8c [290.716966ms]
Aug 24 23:33:25.513: INFO: Created: latency-svc-8jzgk
Aug 24 23:33:25.531: INFO: Got endpoints: latency-svc-8jzgk [407.261934ms]
Aug 24 23:33:25.735: INFO: Created: latency-svc-4w5kv
Aug 24 23:33:25.811: INFO: Got endpoints: latency-svc-4w5kv [687.177601ms]
Aug 24 23:33:26.021: INFO: Created: latency-svc-b46pm
Aug 24 23:33:26.104: INFO: Got endpoints: latency-svc-b46pm
[979.987533ms] Aug 24 23:33:26.429: INFO: Created: latency-svc-ncnzt Aug 24 23:33:26.433: INFO: Got endpoints: latency-svc-ncnzt [1.309282816s] Aug 24 23:33:26.670: INFO: Created: latency-svc-89f56 Aug 24 23:33:26.951: INFO: Got endpoints: latency-svc-89f56 [1.827521034s] Aug 24 23:33:27.030: INFO: Created: latency-svc-rw5bm Aug 24 23:33:27.167: INFO: Got endpoints: latency-svc-rw5bm [2.043218732s] Aug 24 23:33:27.235: INFO: Created: latency-svc-8ll6r Aug 24 23:33:27.261: INFO: Got endpoints: latency-svc-8ll6r [2.137145963s] Aug 24 23:33:27.439: INFO: Created: latency-svc-gdkmq Aug 24 23:33:27.632: INFO: Got endpoints: latency-svc-gdkmq [2.508132185s] Aug 24 23:33:27.654: INFO: Created: latency-svc-jlk67 Aug 24 23:33:27.720: INFO: Got endpoints: latency-svc-jlk67 [2.59609472s] Aug 24 23:33:27.795: INFO: Created: latency-svc-5qd9f Aug 24 23:33:27.813: INFO: Got endpoints: latency-svc-5qd9f [2.688957817s] Aug 24 23:33:27.833: INFO: Created: latency-svc-rkg5t Aug 24 23:33:27.842: INFO: Got endpoints: latency-svc-rkg5t [2.71884533s] Aug 24 23:33:27.863: INFO: Created: latency-svc-nzd4z Aug 24 23:33:27.955: INFO: Got endpoints: latency-svc-nzd4z [2.831180217s] Aug 24 23:33:27.958: INFO: Created: latency-svc-qzk52 Aug 24 23:33:27.969: INFO: Got endpoints: latency-svc-qzk52 [2.844951109s] Aug 24 23:33:28.106: INFO: Created: latency-svc-2jxrq Aug 24 23:33:28.114: INFO: Got endpoints: latency-svc-2jxrq [2.990453492s] Aug 24 23:33:28.189: INFO: Created: latency-svc-9hrhc Aug 24 23:33:28.285: INFO: Got endpoints: latency-svc-9hrhc [2.870565365s] Aug 24 23:33:28.334: INFO: Created: latency-svc-hzr9x Aug 24 23:33:28.348: INFO: Got endpoints: latency-svc-hzr9x [2.817378808s] Aug 24 23:33:28.375: INFO: Created: latency-svc-qbpzr Aug 24 23:33:28.416: INFO: Got endpoints: latency-svc-qbpzr [2.605664004s] Aug 24 23:33:28.435: INFO: Created: latency-svc-56nxz Aug 24 23:33:28.470: INFO: Got endpoints: latency-svc-56nxz [2.36605779s] Aug 24 23:33:28.573: INFO: Created: latency-svc-zjc4d Aug 24 23:33:28.582: INFO: Got endpoints: latency-svc-zjc4d [2.149112783s] Aug 24 23:33:28.613: INFO: Created: latency-svc-6kdfx Aug 24 23:33:28.624: INFO: Got endpoints: latency-svc-6kdfx [1.673241032s] Aug 24 23:33:28.645: INFO: Created: latency-svc-75rmq Aug 24 23:33:28.728: INFO: Got endpoints: latency-svc-75rmq [1.56045511s] Aug 24 23:33:28.740: INFO: Created: latency-svc-snj29 Aug 24 23:33:28.764: INFO: Got endpoints: latency-svc-snj29 [1.503663429s] Aug 24 23:33:28.853: INFO: Created: latency-svc-j8xdh Aug 24 23:33:28.859: INFO: Got endpoints: latency-svc-j8xdh [1.226890383s] Aug 24 23:33:28.909: INFO: Created: latency-svc-8jc87 Aug 24 23:33:28.931: INFO: Got endpoints: latency-svc-8jc87 [1.211551524s] Aug 24 23:33:29.016: INFO: Created: latency-svc-7fkjm Aug 24 23:33:29.034: INFO: Got endpoints: latency-svc-7fkjm [1.220691269s] Aug 24 23:33:29.083: INFO: Created: latency-svc-2dtwh Aug 24 23:33:29.106: INFO: Got endpoints: latency-svc-2dtwh [1.263651103s] Aug 24 23:33:29.225: INFO: Created: latency-svc-nk5qn Aug 24 23:33:29.249: INFO: Got endpoints: latency-svc-nk5qn [1.294221888s] Aug 24 23:33:29.299: INFO: Created: latency-svc-m6wnz Aug 24 23:33:29.369: INFO: Got endpoints: latency-svc-m6wnz [1.399692419s] Aug 24 23:33:29.400: INFO: Created: latency-svc-rdjtc Aug 24 23:33:29.435: INFO: Got endpoints: latency-svc-rdjtc [1.320940561s] Aug 24 23:33:29.459: INFO: Created: latency-svc-xgb5r Aug 24 23:33:29.548: INFO: Got endpoints: latency-svc-xgb5r [1.262836735s] Aug 24 23:33:29.581: INFO: Created: latency-svc-jbspf Aug 24 
23:33:29.592: INFO: Got endpoints: latency-svc-jbspf [1.243328765s] Aug 24 23:33:29.639: INFO: Created: latency-svc-4pk2f Aug 24 23:33:29.698: INFO: Got endpoints: latency-svc-4pk2f [1.281123392s] Aug 24 23:33:29.705: INFO: Created: latency-svc-w22gf Aug 24 23:33:29.718: INFO: Got endpoints: latency-svc-w22gf [1.248607522s] Aug 24 23:33:29.749: INFO: Created: latency-svc-db7xq Aug 24 23:33:29.761: INFO: Got endpoints: latency-svc-db7xq [1.178615494s] Aug 24 23:33:29.785: INFO: Created: latency-svc-z5tmp Aug 24 23:33:29.797: INFO: Got endpoints: latency-svc-z5tmp [1.172215522s] Aug 24 23:33:29.880: INFO: Created: latency-svc-8zsmg Aug 24 23:33:29.909: INFO: Got endpoints: latency-svc-8zsmg [1.181328708s] Aug 24 23:33:29.946: INFO: Created: latency-svc-7wlvw Aug 24 23:33:30.010: INFO: Got endpoints: latency-svc-7wlvw [1.245058948s] Aug 24 23:33:30.024: INFO: Created: latency-svc-vdh8x Aug 24 23:33:30.039: INFO: Got endpoints: latency-svc-vdh8x [1.179792469s] Aug 24 23:33:30.089: INFO: Created: latency-svc-jblzj Aug 24 23:33:30.103: INFO: Got endpoints: latency-svc-jblzj [1.171762761s] Aug 24 23:33:30.213: INFO: Created: latency-svc-n2492 Aug 24 23:33:30.248: INFO: Got endpoints: latency-svc-n2492 [1.213934409s] Aug 24 23:33:30.300: INFO: Created: latency-svc-fnbl2 Aug 24 23:33:30.308: INFO: Got endpoints: latency-svc-fnbl2 [1.20160269s] Aug 24 23:33:30.372: INFO: Created: latency-svc-6tdv2 Aug 24 23:33:30.386: INFO: Got endpoints: latency-svc-6tdv2 [1.136820694s] Aug 24 23:33:30.549: INFO: Created: latency-svc-6wwht Aug 24 23:33:30.554: INFO: Got endpoints: latency-svc-6wwht [1.184952153s] Aug 24 23:33:30.607: INFO: Created: latency-svc-hgcx9 Aug 24 23:33:30.614: INFO: Got endpoints: latency-svc-hgcx9 [1.179110193s] Aug 24 23:33:30.635: INFO: Created: latency-svc-dvhzd Aug 24 23:33:30.716: INFO: Got endpoints: latency-svc-dvhzd [1.167867013s] Aug 24 23:33:30.719: INFO: Created: latency-svc-7bkvd Aug 24 23:33:30.734: INFO: Got endpoints: latency-svc-7bkvd [1.142426211s] Aug 24 23:33:30.756: INFO: Created: latency-svc-2twgq Aug 24 23:33:30.774: INFO: Got endpoints: latency-svc-2twgq [1.075864233s] Aug 24 23:33:30.800: INFO: Created: latency-svc-7hr45 Aug 24 23:33:30.860: INFO: Got endpoints: latency-svc-7hr45 [1.141192012s] Aug 24 23:33:30.869: INFO: Created: latency-svc-5wq5v Aug 24 23:33:30.882: INFO: Got endpoints: latency-svc-5wq5v [1.121144655s] Aug 24 23:33:30.905: INFO: Created: latency-svc-49cnt Aug 24 23:33:30.919: INFO: Got endpoints: latency-svc-49cnt [1.121967177s] Aug 24 23:33:30.941: INFO: Created: latency-svc-jvhm7 Aug 24 23:33:31.053: INFO: Got endpoints: latency-svc-jvhm7 [1.14345603s] Aug 24 23:33:31.085: INFO: Created: latency-svc-pbrvf Aug 24 23:33:31.106: INFO: Got endpoints: latency-svc-pbrvf [1.095877191s] Aug 24 23:33:31.133: INFO: Created: latency-svc-272rt Aug 24 23:33:31.189: INFO: Got endpoints: latency-svc-272rt [1.150081331s] Aug 24 23:33:31.230: INFO: Created: latency-svc-k8k7k Aug 24 23:33:31.243: INFO: Got endpoints: latency-svc-k8k7k [1.139464057s] Aug 24 23:33:31.267: INFO: Created: latency-svc-jq2xr Aug 24 23:33:31.280: INFO: Got endpoints: latency-svc-jq2xr [1.032293515s] Aug 24 23:33:31.339: INFO: Created: latency-svc-5txpz Aug 24 23:33:31.343: INFO: Got endpoints: latency-svc-5txpz [1.035447389s] Aug 24 23:33:31.409: INFO: Created: latency-svc-9bdmp Aug 24 23:33:31.424: INFO: Got endpoints: latency-svc-9bdmp [1.037587368s] Aug 24 23:33:31.489: INFO: Created: latency-svc-hrmhb Aug 24 23:33:31.493: INFO: Got endpoints: latency-svc-hrmhb [939.631713ms] Aug 
24 23:33:31.578: INFO: Created: latency-svc-wn4bs Aug 24 23:33:31.651: INFO: Got endpoints: latency-svc-wn4bs [1.036492976s] Aug 24 23:33:31.653: INFO: Created: latency-svc-j5rgt Aug 24 23:33:31.673: INFO: Got endpoints: latency-svc-j5rgt [956.791959ms] Aug 24 23:33:31.693: INFO: Created: latency-svc-m458m Aug 24 23:33:31.737: INFO: Got endpoints: latency-svc-m458m [1.003092191s] Aug 24 23:33:31.794: INFO: Created: latency-svc-z8fhq Aug 24 23:33:31.803: INFO: Got endpoints: latency-svc-z8fhq [1.028946463s] Aug 24 23:33:31.830: INFO: Created: latency-svc-j9r52 Aug 24 23:33:31.845: INFO: Got endpoints: latency-svc-j9r52 [985.673237ms] Aug 24 23:33:31.871: INFO: Created: latency-svc-v48n6 Aug 24 23:33:31.889: INFO: Got endpoints: latency-svc-v48n6 [1.006962949s] Aug 24 23:33:31.937: INFO: Created: latency-svc-8qzp4 Aug 24 23:33:31.942: INFO: Got endpoints: latency-svc-8qzp4 [1.022937594s] Aug 24 23:33:31.969: INFO: Created: latency-svc-772p5 Aug 24 23:33:31.984: INFO: Got endpoints: latency-svc-772p5 [931.619993ms] Aug 24 23:33:32.004: INFO: Created: latency-svc-fdx22 Aug 24 23:33:32.024: INFO: Got endpoints: latency-svc-fdx22 [918.205644ms] Aug 24 23:33:32.081: INFO: Created: latency-svc-rzxg4 Aug 24 23:33:32.086: INFO: Got endpoints: latency-svc-rzxg4 [896.739913ms] Aug 24 23:33:32.129: INFO: Created: latency-svc-z5lld Aug 24 23:33:32.144: INFO: Got endpoints: latency-svc-z5lld [900.82861ms] Aug 24 23:33:32.273: INFO: Created: latency-svc-qqrsm Aug 24 23:33:32.284: INFO: Got endpoints: latency-svc-qqrsm [1.00439623s] Aug 24 23:33:32.321: INFO: Created: latency-svc-htgnv Aug 24 23:33:32.349: INFO: Got endpoints: latency-svc-htgnv [1.005941963s] Aug 24 23:33:32.417: INFO: Created: latency-svc-4fhpp Aug 24 23:33:32.420: INFO: Got endpoints: latency-svc-4fhpp [996.6508ms] Aug 24 23:33:32.454: INFO: Created: latency-svc-sjbzr Aug 24 23:33:32.476: INFO: Got endpoints: latency-svc-sjbzr [982.583342ms] Aug 24 23:33:32.578: INFO: Created: latency-svc-4mwb9 Aug 24 23:33:32.581: INFO: Got endpoints: latency-svc-4mwb9 [930.757858ms] Aug 24 23:33:32.609: INFO: Created: latency-svc-nsr8j Aug 24 23:33:32.625: INFO: Got endpoints: latency-svc-nsr8j [952.523041ms] Aug 24 23:33:32.646: INFO: Created: latency-svc-zq2h8 Aug 24 23:33:32.661: INFO: Got endpoints: latency-svc-zq2h8 [923.945616ms] Aug 24 23:33:32.740: INFO: Created: latency-svc-sq45r Aug 24 23:33:32.764: INFO: Got endpoints: latency-svc-sq45r [961.901874ms] Aug 24 23:33:32.766: INFO: Created: latency-svc-hdghh Aug 24 23:33:32.794: INFO: Got endpoints: latency-svc-hdghh [948.864083ms] Aug 24 23:33:32.825: INFO: Created: latency-svc-nnrct Aug 24 23:33:32.877: INFO: Got endpoints: latency-svc-nnrct [988.043138ms] Aug 24 23:33:32.885: INFO: Created: latency-svc-n9rcs Aug 24 23:33:32.916: INFO: Got endpoints: latency-svc-n9rcs [973.95212ms] Aug 24 23:33:32.952: INFO: Created: latency-svc-nts74 Aug 24 23:33:32.962: INFO: Got endpoints: latency-svc-nts74 [977.70759ms] Aug 24 23:33:33.016: INFO: Created: latency-svc-jpp42 Aug 24 23:33:33.041: INFO: Got endpoints: latency-svc-jpp42 [1.016923111s] Aug 24 23:33:33.102: INFO: Created: latency-svc-lzg46 Aug 24 23:33:33.177: INFO: Got endpoints: latency-svc-lzg46 [1.091772379s] Aug 24 23:33:33.180: INFO: Created: latency-svc-b9pbb Aug 24 23:33:33.191: INFO: Got endpoints: latency-svc-b9pbb [1.04701099s] Aug 24 23:33:33.214: INFO: Created: latency-svc-ljjq4 Aug 24 23:33:33.227: INFO: Got endpoints: latency-svc-ljjq4 [942.917859ms] Aug 24 23:33:33.250: INFO: Created: latency-svc-rgbkw Aug 24 23:33:33.264: INFO: 
Got endpoints: latency-svc-rgbkw [914.357503ms] Aug 24 23:33:33.321: INFO: Created: latency-svc-9z9vt Aug 24 23:33:33.326: INFO: Got endpoints: latency-svc-9z9vt [905.415996ms] Aug 24 23:33:33.408: INFO: Created: latency-svc-gz54c Aug 24 23:33:33.476: INFO: Got endpoints: latency-svc-gz54c [1.000495784s] Aug 24 23:33:33.478: INFO: Created: latency-svc-tk7mk Aug 24 23:33:33.481: INFO: Got endpoints: latency-svc-tk7mk [899.584334ms] Aug 24 23:33:33.521: INFO: Created: latency-svc-g2z2d Aug 24 23:33:33.535: INFO: Got endpoints: latency-svc-g2z2d [909.313815ms] Aug 24 23:33:33.563: INFO: Created: latency-svc-br9mh Aug 24 23:33:33.656: INFO: Got endpoints: latency-svc-br9mh [994.821048ms] Aug 24 23:33:33.684: INFO: Created: latency-svc-qg74g Aug 24 23:33:33.697: INFO: Got endpoints: latency-svc-qg74g [932.535919ms] Aug 24 23:33:33.719: INFO: Created: latency-svc-5lkv9 Aug 24 23:33:33.755: INFO: Got endpoints: latency-svc-5lkv9 [960.417097ms] Aug 24 23:33:33.830: INFO: Created: latency-svc-wk2dc Aug 24 23:33:33.835: INFO: Got endpoints: latency-svc-wk2dc [958.154375ms] Aug 24 23:33:33.857: INFO: Created: latency-svc-x7xsx Aug 24 23:33:33.871: INFO: Got endpoints: latency-svc-x7xsx [955.561199ms] Aug 24 23:33:33.917: INFO: Created: latency-svc-tt84k Aug 24 23:33:33.985: INFO: Got endpoints: latency-svc-tt84k [1.022775768s] Aug 24 23:33:33.988: INFO: Created: latency-svc-g8swd Aug 24 23:33:33.992: INFO: Got endpoints: latency-svc-g8swd [950.983884ms] Aug 24 23:33:34.042: INFO: Created: latency-svc-s6dbf Aug 24 23:33:34.058: INFO: Got endpoints: latency-svc-s6dbf [880.81404ms] Aug 24 23:33:34.080: INFO: Created: latency-svc-h87dd Aug 24 23:33:34.141: INFO: Got endpoints: latency-svc-h87dd [949.810369ms] Aug 24 23:33:34.142: INFO: Created: latency-svc-5kh2s Aug 24 23:33:34.155: INFO: Got endpoints: latency-svc-5kh2s [927.142225ms] Aug 24 23:33:34.180: INFO: Created: latency-svc-drsk6 Aug 24 23:33:34.234: INFO: Got endpoints: latency-svc-drsk6 [970.384433ms] Aug 24 23:33:34.314: INFO: Created: latency-svc-tqkxm Aug 24 23:33:34.337: INFO: Got endpoints: latency-svc-tqkxm [1.011501763s] Aug 24 23:33:34.373: INFO: Created: latency-svc-s92kz Aug 24 23:33:34.389: INFO: Got endpoints: latency-svc-s92kz [912.756072ms] Aug 24 23:33:34.476: INFO: Created: latency-svc-6jjk2 Aug 24 23:33:34.479: INFO: Got endpoints: latency-svc-6jjk2 [997.903924ms] Aug 24 23:33:34.571: INFO: Created: latency-svc-qqdgq Aug 24 23:33:34.674: INFO: Got endpoints: latency-svc-qqdgq [1.138791158s] Aug 24 23:33:34.676: INFO: Created: latency-svc-lqj2p Aug 24 23:33:34.681: INFO: Got endpoints: latency-svc-lqj2p [1.024815295s] Aug 24 23:33:34.708: INFO: Created: latency-svc-x46bv Aug 24 23:33:34.717: INFO: Got endpoints: latency-svc-x46bv [1.020248205s] Aug 24 23:33:34.738: INFO: Created: latency-svc-69hnh Aug 24 23:33:34.748: INFO: Got endpoints: latency-svc-69hnh [993.631846ms] Aug 24 23:33:34.823: INFO: Created: latency-svc-n6pct Aug 24 23:33:34.826: INFO: Got endpoints: latency-svc-n6pct [990.192124ms] Aug 24 23:33:34.859: INFO: Created: latency-svc-qzlmh Aug 24 23:33:34.888: INFO: Got endpoints: latency-svc-qzlmh [1.016409928s] Aug 24 23:33:34.974: INFO: Created: latency-svc-njmvl Aug 24 23:33:35.009: INFO: Got endpoints: latency-svc-njmvl [1.023843331s] Aug 24 23:33:35.040: INFO: Created: latency-svc-rjpbl Aug 24 23:33:35.055: INFO: Got endpoints: latency-svc-rjpbl [1.063165241s] Aug 24 23:33:35.129: INFO: Created: latency-svc-rldgq Aug 24 23:33:35.146: INFO: Got endpoints: latency-svc-rldgq [1.087415771s] Aug 24 23:33:35.188: 
INFO: Created: latency-svc-b78hz Aug 24 23:33:35.200: INFO: Got endpoints: latency-svc-b78hz [1.058920738s] Aug 24 23:33:35.226: INFO: Created: latency-svc-mg5vc Aug 24 23:33:35.291: INFO: Got endpoints: latency-svc-mg5vc [1.135936485s] Aug 24 23:33:35.293: INFO: Created: latency-svc-6hznx Aug 24 23:33:35.302: INFO: Got endpoints: latency-svc-6hznx [1.067562547s] Aug 24 23:33:35.326: INFO: Created: latency-svc-sxwg2 Aug 24 23:33:35.338: INFO: Got endpoints: latency-svc-sxwg2 [1.000751235s] Aug 24 23:33:35.362: INFO: Created: latency-svc-fmwdc Aug 24 23:33:35.375: INFO: Got endpoints: latency-svc-fmwdc [985.692165ms] Aug 24 23:33:35.450: INFO: Created: latency-svc-cm5jl Aug 24 23:33:35.462: INFO: Got endpoints: latency-svc-cm5jl [983.388453ms] Aug 24 23:33:35.513: INFO: Created: latency-svc-zjzbp Aug 24 23:33:35.526: INFO: Got endpoints: latency-svc-zjzbp [852.223048ms] Aug 24 23:33:35.585: INFO: Created: latency-svc-v68j6 Aug 24 23:33:35.608: INFO: Got endpoints: latency-svc-v68j6 [926.314124ms] Aug 24 23:33:35.645: INFO: Created: latency-svc-bkltb Aug 24 23:33:35.658: INFO: Got endpoints: latency-svc-bkltb [940.089515ms] Aug 24 23:33:35.751: INFO: Created: latency-svc-h64fh Aug 24 23:33:35.755: INFO: Got endpoints: latency-svc-h64fh [1.00614105s] Aug 24 23:33:35.789: INFO: Created: latency-svc-z57rp Aug 24 23:33:35.824: INFO: Got endpoints: latency-svc-z57rp [997.985223ms] Aug 24 23:33:35.889: INFO: Created: latency-svc-vzswx Aug 24 23:33:35.914: INFO: Created: latency-svc-4xc8d Aug 24 23:33:35.914: INFO: Got endpoints: latency-svc-vzswx [1.026423035s] Aug 24 23:33:35.931: INFO: Got endpoints: latency-svc-4xc8d [922.337704ms] Aug 24 23:33:35.962: INFO: Created: latency-svc-2tvmj Aug 24 23:33:35.979: INFO: Got endpoints: latency-svc-2tvmj [923.876171ms] Aug 24 23:33:36.054: INFO: Created: latency-svc-v4nl4 Aug 24 23:33:36.076: INFO: Got endpoints: latency-svc-v4nl4 [929.949669ms] Aug 24 23:33:36.119: INFO: Created: latency-svc-g6s2z Aug 24 23:33:36.225: INFO: Got endpoints: latency-svc-g6s2z [1.025030397s] Aug 24 23:33:36.228: INFO: Created: latency-svc-pqh9s Aug 24 23:33:36.261: INFO: Got endpoints: latency-svc-pqh9s [970.797476ms] Aug 24 23:33:36.285: INFO: Created: latency-svc-ngtc9 Aug 24 23:33:36.298: INFO: Got endpoints: latency-svc-ngtc9 [996.58299ms] Aug 24 23:33:36.321: INFO: Created: latency-svc-8dh7t Aug 24 23:33:36.392: INFO: Got endpoints: latency-svc-8dh7t [1.053718158s] Aug 24 23:33:36.413: INFO: Created: latency-svc-fpsb7 Aug 24 23:33:36.430: INFO: Got endpoints: latency-svc-fpsb7 [1.05489752s] Aug 24 23:33:36.461: INFO: Created: latency-svc-bvx7m Aug 24 23:33:36.473: INFO: Got endpoints: latency-svc-bvx7m [1.01004115s] Aug 24 23:33:36.572: INFO: Created: latency-svc-cml99 Aug 24 23:33:36.575: INFO: Got endpoints: latency-svc-cml99 [1.049050015s] Aug 24 23:33:36.615: INFO: Created: latency-svc-fhmxg Aug 24 23:33:36.646: INFO: Got endpoints: latency-svc-fhmxg [1.037901664s] Aug 24 23:33:36.671: INFO: Created: latency-svc-59494 Aug 24 23:33:36.728: INFO: Got endpoints: latency-svc-59494 [1.070138269s] Aug 24 23:33:36.749: INFO: Created: latency-svc-kfngp Aug 24 23:33:36.761: INFO: Got endpoints: latency-svc-kfngp [1.006719959s] Aug 24 23:33:36.826: INFO: Created: latency-svc-t6q6j Aug 24 23:33:36.871: INFO: Got endpoints: latency-svc-t6q6j [1.047567273s] Aug 24 23:33:36.880: INFO: Created: latency-svc-lh52l Aug 24 23:33:36.904: INFO: Got endpoints: latency-svc-lh52l [989.915395ms] Aug 24 23:33:36.941: INFO: Created: latency-svc-6x2hw Aug 24 23:33:36.954: INFO: Got endpoints: 
latency-svc-6x2hw [1.022878218s] Aug 24 23:33:37.015: INFO: Created: latency-svc-sntt4 Aug 24 23:33:37.019: INFO: Got endpoints: latency-svc-sntt4 [1.039727748s] Aug 24 23:33:37.053: INFO: Created: latency-svc-tz42c Aug 24 23:33:37.069: INFO: Got endpoints: latency-svc-tz42c [993.1602ms] Aug 24 23:33:37.090: INFO: Created: latency-svc-xhnrb Aug 24 23:33:37.105: INFO: Got endpoints: latency-svc-xhnrb [880.385961ms] Aug 24 23:33:37.183: INFO: Created: latency-svc-22jrj Aug 24 23:33:37.190: INFO: Got endpoints: latency-svc-22jrj [928.409347ms] Aug 24 23:33:37.211: INFO: Created: latency-svc-rdccv Aug 24 23:33:37.219: INFO: Got endpoints: latency-svc-rdccv [920.994028ms] Aug 24 23:33:37.245: INFO: Created: latency-svc-4bmmk Aug 24 23:33:37.276: INFO: Got endpoints: latency-svc-4bmmk [883.702918ms] Aug 24 23:33:37.351: INFO: Created: latency-svc-tb75w Aug 24 23:33:37.358: INFO: Got endpoints: latency-svc-tb75w [927.95349ms] Aug 24 23:33:37.385: INFO: Created: latency-svc-9cj95 Aug 24 23:33:37.399: INFO: Got endpoints: latency-svc-9cj95 [926.285293ms] Aug 24 23:33:37.420: INFO: Created: latency-svc-8rql9 Aug 24 23:33:37.435: INFO: Got endpoints: latency-svc-8rql9 [860.112324ms] Aug 24 23:33:37.502: INFO: Created: latency-svc-xhjm4 Aug 24 23:33:37.515: INFO: Got endpoints: latency-svc-xhjm4 [869.180024ms] Aug 24 23:33:37.577: INFO: Created: latency-svc-4mmwd Aug 24 23:33:37.674: INFO: Got endpoints: latency-svc-4mmwd [946.324878ms] Aug 24 23:33:37.677: INFO: Created: latency-svc-2c9nw Aug 24 23:33:37.713: INFO: Got endpoints: latency-svc-2c9nw [951.220288ms] Aug 24 23:33:37.761: INFO: Created: latency-svc-qlwhn Aug 24 23:33:37.848: INFO: Got endpoints: latency-svc-qlwhn [977.014248ms] Aug 24 23:33:37.850: INFO: Created: latency-svc-6wxm4 Aug 24 23:33:37.870: INFO: Got endpoints: latency-svc-6wxm4 [965.654511ms] Aug 24 23:33:37.899: INFO: Created: latency-svc-rlww6 Aug 24 23:33:37.916: INFO: Got endpoints: latency-svc-rlww6 [961.939178ms] Aug 24 23:33:37.935: INFO: Created: latency-svc-dwsk8 Aug 24 23:33:38.021: INFO: Got endpoints: latency-svc-dwsk8 [1.002482047s] Aug 24 23:33:38.024: INFO: Created: latency-svc-xvpzd Aug 24 23:33:38.031: INFO: Got endpoints: latency-svc-xvpzd [961.919727ms] Aug 24 23:33:38.050: INFO: Created: latency-svc-96865 Aug 24 23:33:38.087: INFO: Got endpoints: latency-svc-96865 [981.462564ms] Aug 24 23:33:38.110: INFO: Created: latency-svc-r85ds Aug 24 23:33:38.213: INFO: Got endpoints: latency-svc-r85ds [1.022742805s] Aug 24 23:33:38.215: INFO: Created: latency-svc-lmp4b Aug 24 23:33:38.223: INFO: Got endpoints: latency-svc-lmp4b [1.003972314s] Aug 24 23:33:38.266: INFO: Created: latency-svc-xkbff Aug 24 23:33:38.291: INFO: Got endpoints: latency-svc-xkbff [1.015251981s] Aug 24 23:33:38.363: INFO: Created: latency-svc-jcndm Aug 24 23:33:38.374: INFO: Got endpoints: latency-svc-jcndm [1.015924319s] Aug 24 23:33:38.404: INFO: Created: latency-svc-v69wr Aug 24 23:33:38.425: INFO: Got endpoints: latency-svc-v69wr [1.025679628s] Aug 24 23:33:38.457: INFO: Created: latency-svc-qjb8t Aug 24 23:33:38.514: INFO: Got endpoints: latency-svc-qjb8t [1.07874393s] Aug 24 23:33:38.567: INFO: Created: latency-svc-wmxf5 Aug 24 23:33:38.591: INFO: Got endpoints: latency-svc-wmxf5 [1.076286137s] Aug 24 23:33:38.645: INFO: Created: latency-svc-pdhgl Aug 24 23:33:38.654: INFO: Got endpoints: latency-svc-pdhgl [980.349613ms] Aug 24 23:33:38.722: INFO: Created: latency-svc-v6p99 Aug 24 23:33:38.800: INFO: Got endpoints: latency-svc-v6p99 [1.087383759s] Aug 24 23:33:38.802: INFO: Created: 
latency-svc-4jxbg Aug 24 23:33:38.814: INFO: Got endpoints: latency-svc-4jxbg [965.551112ms] Aug 24 23:33:38.853: INFO: Created: latency-svc-cpm9b Aug 24 23:33:38.883: INFO: Got endpoints: latency-svc-cpm9b [1.01308559s] Aug 24 23:33:38.950: INFO: Created: latency-svc-zfx24 Aug 24 23:33:38.956: INFO: Got endpoints: latency-svc-zfx24 [1.0400517s] Aug 24 23:33:38.986: INFO: Created: latency-svc-pjhzc Aug 24 23:33:39.008: INFO: Got endpoints: latency-svc-pjhzc [986.316826ms] Aug 24 23:33:39.047: INFO: Created: latency-svc-bntgf Aug 24 23:33:39.117: INFO: Got endpoints: latency-svc-bntgf [1.086457022s] Aug 24 23:33:39.119: INFO: Created: latency-svc-kwtd2 Aug 24 23:33:39.134: INFO: Got endpoints: latency-svc-kwtd2 [1.047238307s] Aug 24 23:33:39.160: INFO: Created: latency-svc-l92zs Aug 24 23:33:39.185: INFO: Got endpoints: latency-svc-l92zs [971.964888ms] Aug 24 23:33:39.214: INFO: Created: latency-svc-qnc88 Aug 24 23:33:39.285: INFO: Got endpoints: latency-svc-qnc88 [1.061264969s] Aug 24 23:33:39.287: INFO: Created: latency-svc-cz96k Aug 24 23:33:39.297: INFO: Got endpoints: latency-svc-cz96k [1.005944958s] Aug 24 23:33:39.321: INFO: Created: latency-svc-qx7b7 Aug 24 23:33:39.345: INFO: Got endpoints: latency-svc-qx7b7 [971.203808ms] Aug 24 23:33:39.446: INFO: Created: latency-svc-58r4r Aug 24 23:33:39.478: INFO: Got endpoints: latency-svc-58r4r [1.053523841s] Aug 24 23:33:39.478: INFO: Created: latency-svc-hhwwj Aug 24 23:33:39.495: INFO: Got endpoints: latency-svc-hhwwj [980.556248ms] Aug 24 23:33:39.525: INFO: Created: latency-svc-zjc88 Aug 24 23:33:39.538: INFO: Got endpoints: latency-svc-zjc88 [946.771634ms] Aug 24 23:33:39.614: INFO: Created: latency-svc-hfbg9 Aug 24 23:33:39.622: INFO: Got endpoints: latency-svc-hfbg9 [967.700537ms] Aug 24 23:33:39.646: INFO: Created: latency-svc-kxwdf Aug 24 23:33:39.671: INFO: Got endpoints: latency-svc-kxwdf [870.838939ms] Aug 24 23:33:39.765: INFO: Created: latency-svc-4pr5c Aug 24 23:33:39.795: INFO: Created: latency-svc-z4mjh Aug 24 23:33:39.795: INFO: Got endpoints: latency-svc-4pr5c [981.458144ms] Aug 24 23:33:39.826: INFO: Got endpoints: latency-svc-z4mjh [943.058123ms] Aug 24 23:33:39.863: INFO: Created: latency-svc-phhn8 Aug 24 23:33:39.913: INFO: Got endpoints: latency-svc-phhn8 [956.520897ms] Aug 24 23:33:39.922: INFO: Created: latency-svc-zj74q Aug 24 23:33:39.935: INFO: Got endpoints: latency-svc-zj74q [927.12072ms] Aug 24 23:33:39.957: INFO: Created: latency-svc-65fzc Aug 24 23:33:39.972: INFO: Got endpoints: latency-svc-65fzc [854.649163ms] Aug 24 23:33:39.995: INFO: Created: latency-svc-59vqx Aug 24 23:33:40.092: INFO: Created: latency-svc-t9vdt Aug 24 23:33:40.092: INFO: Got endpoints: latency-svc-59vqx [958.044383ms] Aug 24 23:33:40.120: INFO: Got endpoints: latency-svc-t9vdt [935.36547ms] Aug 24 23:33:40.148: INFO: Created: latency-svc-khgdg Aug 24 23:33:40.170: INFO: Got endpoints: latency-svc-khgdg [885.554763ms] Aug 24 23:33:40.285: INFO: Created: latency-svc-tnt42 Aug 24 23:33:40.296: INFO: Got endpoints: latency-svc-tnt42 [998.580412ms] Aug 24 23:33:40.319: INFO: Created: latency-svc-2n5xn Aug 24 23:33:40.347: INFO: Got endpoints: latency-svc-2n5xn [1.001279802s] Aug 24 23:33:40.377: INFO: Created: latency-svc-6h4kr Aug 24 23:33:40.418: INFO: Got endpoints: latency-svc-6h4kr [940.112901ms] Aug 24 23:33:40.430: INFO: Created: latency-svc-fxdr5 Aug 24 23:33:40.447: INFO: Got endpoints: latency-svc-fxdr5 [952.056983ms] Aug 24 23:33:40.475: INFO: Created: latency-svc-nn47s Aug 24 23:33:40.489: INFO: Got endpoints: 
latency-svc-nn47s [950.819623ms] Aug 24 23:33:40.511: INFO: Created: latency-svc-lpr6p Aug 24 23:33:40.584: INFO: Got endpoints: latency-svc-lpr6p [961.622636ms] Aug 24 23:33:40.611: INFO: Created: latency-svc-8p9tg Aug 24 23:33:40.642: INFO: Got endpoints: latency-svc-8p9tg [970.756881ms] Aug 24 23:33:40.672: INFO: Created: latency-svc-bvls2 Aug 24 23:33:40.721: INFO: Got endpoints: latency-svc-bvls2 [925.910751ms] Aug 24 23:33:40.722: INFO: Latencies: [290.716966ms 407.261934ms 687.177601ms 852.223048ms 854.649163ms 860.112324ms 869.180024ms 870.838939ms 880.385961ms 880.81404ms 883.702918ms 885.554763ms 896.739913ms 899.584334ms 900.82861ms 905.415996ms 909.313815ms 912.756072ms 914.357503ms 918.205644ms 920.994028ms 922.337704ms 923.876171ms 923.945616ms 925.910751ms 926.285293ms 926.314124ms 927.12072ms 927.142225ms 927.95349ms 928.409347ms 929.949669ms 930.757858ms 931.619993ms 932.535919ms 935.36547ms 939.631713ms 940.089515ms 940.112901ms 942.917859ms 943.058123ms 946.324878ms 946.771634ms 948.864083ms 949.810369ms 950.819623ms 950.983884ms 951.220288ms 952.056983ms 952.523041ms 955.561199ms 956.520897ms 956.791959ms 958.044383ms 958.154375ms 960.417097ms 961.622636ms 961.901874ms 961.919727ms 961.939178ms 965.551112ms 965.654511ms 967.700537ms 970.384433ms 970.756881ms 970.797476ms 971.203808ms 971.964888ms 973.95212ms 977.014248ms 977.70759ms 979.987533ms 980.349613ms 980.556248ms 981.458144ms 981.462564ms 982.583342ms 983.388453ms 985.673237ms 985.692165ms 986.316826ms 988.043138ms 989.915395ms 990.192124ms 993.1602ms 993.631846ms 994.821048ms 996.58299ms 996.6508ms 997.903924ms 997.985223ms 998.580412ms 1.000495784s 1.000751235s 1.001279802s 1.002482047s 1.003092191s 1.003972314s 1.00439623s 1.005941963s 1.005944958s 1.00614105s 1.006719959s 1.006962949s 1.01004115s 1.011501763s 1.01308559s 1.015251981s 1.015924319s 1.016409928s 1.016923111s 1.020248205s 1.022742805s 1.022775768s 1.022878218s 1.022937594s 1.023843331s 1.024815295s 1.025030397s 1.025679628s 1.026423035s 1.028946463s 1.032293515s 1.035447389s 1.036492976s 1.037587368s 1.037901664s 1.039727748s 1.0400517s 1.04701099s 1.047238307s 1.047567273s 1.049050015s 1.053523841s 1.053718158s 1.05489752s 1.058920738s 1.061264969s 1.063165241s 1.067562547s 1.070138269s 1.075864233s 1.076286137s 1.07874393s 1.086457022s 1.087383759s 1.087415771s 1.091772379s 1.095877191s 1.121144655s 1.121967177s 1.135936485s 1.136820694s 1.138791158s 1.139464057s 1.141192012s 1.142426211s 1.14345603s 1.150081331s 1.167867013s 1.171762761s 1.172215522s 1.178615494s 1.179110193s 1.179792469s 1.181328708s 1.184952153s 1.20160269s 1.211551524s 1.213934409s 1.220691269s 1.226890383s 1.243328765s 1.245058948s 1.248607522s 1.262836735s 1.263651103s 1.281123392s 1.294221888s 1.309282816s 1.320940561s 1.399692419s 1.503663429s 1.56045511s 1.673241032s 1.827521034s 2.043218732s 2.137145963s 2.149112783s 2.36605779s 2.508132185s 2.59609472s 2.605664004s 2.688957817s 2.71884533s 2.817378808s 2.831180217s 2.844951109s 2.870565365s 2.990453492s] Aug 24 23:33:40.722: INFO: 50 %ile: 1.005944958s Aug 24 23:33:40.722: INFO: 90 %ile: 1.320940561s Aug 24 23:33:40.722: INFO: 99 %ile: 2.870565365s Aug 24 23:33:40.722: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:33:40.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"svc-latency-2304" for this suite. • [SLOW TEST:19.999 seconds] [sig-network] Service endpoints latency /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":303,"completed":35,"skipped":579,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:33:40.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2725.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2725.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2725.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2725.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2725.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2725.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2725.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2725.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2725.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2725.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2725.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 6.203.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.203.6_udp@PTR;check="$$(dig +tcp +noall +answer +search 6.203.111.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.111.203.6_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2725.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2725.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2725.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2725.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2725.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2725.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2725.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2725.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2725.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2725.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2725.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 6.203.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.203.6_udp@PTR;check="$$(dig +tcp +noall +answer +search 6.203.111.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.111.203.6_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 24 23:33:49.097: INFO: Unable to read wheezy_udp@dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:33:49.124: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:33:49.169: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:33:49.204: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:33:49.324: INFO: Unable to read jessie_udp@dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:33:49.327: INFO: Unable to read jessie_tcp@dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:33:49.347: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:33:49.381: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:33:49.437: INFO: Lookups using dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca failed for: [wheezy_udp@dns-test-service.dns-2725.svc.cluster.local wheezy_tcp@dns-test-service.dns-2725.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local jessie_udp@dns-test-service.dns-2725.svc.cluster.local jessie_tcp@dns-test-service.dns-2725.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local] Aug 24 23:33:54.756: INFO: Unable to read wheezy_udp@dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:33:55.019: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods 
dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:33:55.178: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:33:55.479: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:33:56.651: INFO: Unable to read jessie_udp@dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:33:56.664: INFO: Unable to read jessie_tcp@dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:33:56.678: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:33:56.707: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:33:56.956: INFO: Lookups using dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca failed for: [wheezy_udp@dns-test-service.dns-2725.svc.cluster.local wheezy_tcp@dns-test-service.dns-2725.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local jessie_udp@dns-test-service.dns-2725.svc.cluster.local jessie_tcp@dns-test-service.dns-2725.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local] Aug 24 23:33:59.448: INFO: Unable to read wheezy_udp@dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:33:59.477: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:33:59.480: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:33:59.501: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:33:59.682: INFO: Unable to read jessie_udp@dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the 
server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:33:59.759: INFO: Unable to read jessie_tcp@dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:33:59.765: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:33:59.774: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:33:59.933: INFO: Lookups using dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca failed for: [wheezy_udp@dns-test-service.dns-2725.svc.cluster.local wheezy_tcp@dns-test-service.dns-2725.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local jessie_udp@dns-test-service.dns-2725.svc.cluster.local jessie_tcp@dns-test-service.dns-2725.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local] Aug 24 23:34:04.462: INFO: Unable to read wheezy_udp@dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:34:04.526: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:34:04.590: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:34:04.598: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:34:04.982: INFO: Unable to read jessie_udp@dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:34:05.024: INFO: Unable to read jessie_tcp@dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:34:05.070: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:34:05.078: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local from pod 
dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:34:05.235: INFO: Lookups using dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca failed for: [wheezy_udp@dns-test-service.dns-2725.svc.cluster.local wheezy_tcp@dns-test-service.dns-2725.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local jessie_udp@dns-test-service.dns-2725.svc.cluster.local jessie_tcp@dns-test-service.dns-2725.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local] Aug 24 23:34:09.476: INFO: Unable to read wheezy_udp@dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:34:09.525: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:34:09.529: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:34:09.550: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:34:09.730: INFO: Unable to read jessie_udp@dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:34:09.794: INFO: Unable to read jessie_tcp@dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:34:09.797: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:34:09.808: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:34:10.010: INFO: Lookups using dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca failed for: [wheezy_udp@dns-test-service.dns-2725.svc.cluster.local wheezy_tcp@dns-test-service.dns-2725.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local jessie_udp@dns-test-service.dns-2725.svc.cluster.local jessie_tcp@dns-test-service.dns-2725.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local] Aug 24 
23:34:14.451: INFO: Unable to read wheezy_udp@dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:34:14.455: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:34:14.544: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:34:14.553: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:34:14.681: INFO: Unable to read jessie_udp@dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:34:14.686: INFO: Unable to read jessie_tcp@dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:34:14.692: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:34:14.712: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local from pod dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca: the server could not find the requested resource (get pods dns-test-e5e84236-ced3-4903-915c-33c623375fca) Aug 24 23:34:14.945: INFO: Lookups using dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca failed for: [wheezy_udp@dns-test-service.dns-2725.svc.cluster.local wheezy_tcp@dns-test-service.dns-2725.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local jessie_udp@dns-test-service.dns-2725.svc.cluster.local jessie_tcp@dns-test-service.dns-2725.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2725.svc.cluster.local] Aug 24 23:34:19.495: INFO: DNS probes using dns-2725/dns-test-e5e84236-ced3-4903-915c-33c623375fca succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:34:20.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2725" for this suite. 
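The dig loops above (one per probe image, wheezy and jessie) only assert that each A, SRV, and PTR query returns a non-empty answer; the transient "Unable to read ..." failures clear once cluster DNS has propagated the new service records, and the probe reports success at 23:34:19. For reference, a minimal Go sketch of the same A and SRV checks using the standard library resolver; it assumes it runs inside a pod with cluster DNS configured, and reuses the service name from the log:

    // dnscheck.go: a sketch of the A and SRV lookups the probe performs
    // with dig; assumes in-cluster DNS so the service name resolves.
    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // A record, as checked by "dig +noall +answer +search <service> A".
        addrs, err := net.DefaultResolver.LookupHost(ctx,
            "dns-test-service.dns-2725.svc.cluster.local")
        if err != nil || len(addrs) == 0 {
            fmt.Println("A lookup failed:", err)
            return
        }

        // SRV record, as checked by "dig ... _http._tcp.<service> SRV".
        _, srvs, err := net.DefaultResolver.LookupSRV(ctx, "http", "tcp",
            "dns-test-service.dns-2725.svc.cluster.local")
        if err != nil || len(srvs) == 0 {
            fmt.Println("SRV lookup failed:", err)
            return
        }
        fmt.Println("OK") // the real probe writes OK into /results/ per name
    }

The actual probe additionally exercises UDP and TCP variants (+notcp / +tcp) and a PTR lookup of the service IP, which this resolver sketch glosses over.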
• [SLOW TEST:39.996 seconds] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":303,"completed":36,"skipped":597,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] IngressClass API should support creating IngressClass API operations [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] IngressClass API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:34:20.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 [It] should support creating IngressClass API operations [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Aug 24 23:34:20.941: INFO: starting watch STEP: patching STEP: updating Aug 24 23:34:20.953: INFO: waiting for watch events with expected annotations Aug 24 23:34:20.953: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:34:21.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-7993" for this suite. 
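The IngressClass spec above walks the full verb set (create, get, list, watch, patch, update, delete, deletecollection) against networking.k8s.io/v1; the step text "getting /apis/networking.k8s.iov1" is the test's own log message, which concatenates group and version without a slash. A minimal client-go sketch of the create/delete pair, assuming a reachable kubeconfig; the class name and controller string are illustrative:

    // ingressclass.go: a sketch of the create and delete verbs the spec
    // exercises; error handling is kept terse for brevity.
    package main

    import (
        "context"
        "fmt"

        networkingv1 "k8s.io/api/networking/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()

        ic := &networkingv1.IngressClass{
            ObjectMeta: metav1.ObjectMeta{Name: "example-class"}, // illustrative
            Spec: networkingv1.IngressClassSpec{
                Controller: "example.com/ingress-controller", // illustrative
            },
        }
        created, err := cs.NetworkingV1().IngressClasses().Create(ctx, ic, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("created:", created.Name)

        // get, list, watch, patch, and update go through the same interface.
        _ = cs.NetworkingV1().IngressClasses().Delete(ctx, created.Name, metav1.DeleteOptions{})
    }

IngressClass is cluster-scoped, which is why no namespace argument appears anywhere in the IngressClasses() interface even though the spec still builds a namespace for its client plumbing.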
•{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":303,"completed":37,"skipped":653,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:34:21.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 24 23:34:21.891: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 24 23:34:23.902: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908861, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908861, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908862, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733908861, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 24 23:34:26.951: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 24 23:34:26.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2785-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:34:28.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5319" for this suite. STEP: Destroying namespace "webhook-5319-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.163 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":303,"completed":38,"skipped":661,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:34:28.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:34:44.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2251" for this suite. • [SLOW TEST:16.529 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":303,"completed":39,"skipped":670,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:34:44.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-2150 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-2150 I0824 23:34:44.998052 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-2150, replica count: 2 I0824 23:34:48.048557 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0824 23:34:51.048887 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 24 23:34:51.048: INFO: Creating new exec pod Aug 24 23:34:56.076: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2150 execpodk45n5 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Aug 24 23:34:59.349: INFO: stderr: "I0824 23:34:59.249463 130 log.go:181] (0xc00003b810) (0xc000158320) Create stream\nI0824 23:34:59.249545 130 log.go:181] (0xc00003b810) (0xc000158320) Stream added, broadcasting: 1\nI0824 23:34:59.251615 130 log.go:181] (0xc00003b810) Reply frame received for 1\nI0824 23:34:59.251651 130 log.go:181] (0xc00003b810) (0xc000644000) Create stream\nI0824 23:34:59.251661 130 log.go:181] (0xc00003b810) (0xc000644000) Stream added, broadcasting: 3\nI0824 23:34:59.252585 130 log.go:181] (0xc00003b810) Reply frame received for 3\nI0824 23:34:59.252650 130 log.go:181] (0xc00003b810) (0xc000e8a000) Create stream\nI0824 23:34:59.252674 130 log.go:181] (0xc00003b810) (0xc000e8a000) Stream added, broadcasting: 5\nI0824 23:34:59.253925 130 log.go:181] (0xc00003b810) Reply frame received for 5\nI0824 23:34:59.336318 130 log.go:181] (0xc00003b810) Data frame received for 5\nI0824 23:34:59.336347 130 log.go:181] (0xc000e8a000) (5) Data frame handling\nI0824 23:34:59.336379 130 log.go:181] (0xc000e8a000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0824 23:34:59.336643 130 log.go:181] (0xc00003b810) Data frame received for 5\nI0824 23:34:59.336670 130 log.go:181] (0xc000e8a000) (5) Data frame handling\nI0824 23:34:59.336700 130 log.go:181] (0xc000e8a000) (5) Data frame 
sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0824 23:34:59.336957 130 log.go:181] (0xc00003b810) Data frame received for 5\nI0824 23:34:59.336979 130 log.go:181] (0xc000e8a000) (5) Data frame handling\nI0824 23:34:59.337133 130 log.go:181] (0xc00003b810) Data frame received for 3\nI0824 23:34:59.337151 130 log.go:181] (0xc000644000) (3) Data frame handling\nI0824 23:34:59.338831 130 log.go:181] (0xc00003b810) Data frame received for 1\nI0824 23:34:59.338854 130 log.go:181] (0xc000158320) (1) Data frame handling\nI0824 23:34:59.338867 130 log.go:181] (0xc000158320) (1) Data frame sent\nI0824 23:34:59.338883 130 log.go:181] (0xc00003b810) (0xc000158320) Stream removed, broadcasting: 1\nI0824 23:34:59.339031 130 log.go:181] (0xc00003b810) Go away received\nI0824 23:34:59.339264 130 log.go:181] (0xc00003b810) (0xc000158320) Stream removed, broadcasting: 1\nI0824 23:34:59.339279 130 log.go:181] (0xc00003b810) (0xc000644000) Stream removed, broadcasting: 3\nI0824 23:34:59.339286 130 log.go:181] (0xc00003b810) (0xc000e8a000) Stream removed, broadcasting: 5\n" Aug 24 23:34:59.349: INFO: stdout: "" Aug 24 23:34:59.350: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2150 execpodk45n5 -- /bin/sh -x -c nc -zv -t -w 2 10.98.102.168 80' Aug 24 23:34:59.562: INFO: stderr: "I0824 23:34:59.485669 148 log.go:181] (0xc000148fd0) (0xc000694fa0) Create stream\nI0824 23:34:59.485738 148 log.go:181] (0xc000148fd0) (0xc000694fa0) Stream added, broadcasting: 1\nI0824 23:34:59.493244 148 log.go:181] (0xc000148fd0) Reply frame received for 1\nI0824 23:34:59.493281 148 log.go:181] (0xc000148fd0) (0xc000c8e0a0) Create stream\nI0824 23:34:59.493290 148 log.go:181] (0xc000148fd0) (0xc000c8e0a0) Stream added, broadcasting: 3\nI0824 23:34:59.494250 148 log.go:181] (0xc000148fd0) Reply frame received for 3\nI0824 23:34:59.494294 148 log.go:181] (0xc000148fd0) (0xc000695720) Create stream\nI0824 23:34:59.494306 148 log.go:181] (0xc000148fd0) (0xc000695720) Stream added, broadcasting: 5\nI0824 23:34:59.495056 148 log.go:181] (0xc000148fd0) Reply frame received for 5\nI0824 23:34:59.550345 148 log.go:181] (0xc000148fd0) Data frame received for 3\nI0824 23:34:59.550372 148 log.go:181] (0xc000c8e0a0) (3) Data frame handling\nI0824 23:34:59.550485 148 log.go:181] (0xc000148fd0) Data frame received for 5\nI0824 23:34:59.550508 148 log.go:181] (0xc000695720) (5) Data frame handling\nI0824 23:34:59.550530 148 log.go:181] (0xc000695720) (5) Data frame sent\nI0824 23:34:59.550541 148 log.go:181] (0xc000148fd0) Data frame received for 5\nI0824 23:34:59.550546 148 log.go:181] (0xc000695720) (5) Data frame handling\n+ nc -zv -t -w 2 10.98.102.168 80\nConnection to 10.98.102.168 80 port [tcp/http] succeeded!\nI0824 23:34:59.551897 148 log.go:181] (0xc000148fd0) Data frame received for 1\nI0824 23:34:59.551926 148 log.go:181] (0xc000694fa0) (1) Data frame handling\nI0824 23:34:59.551941 148 log.go:181] (0xc000694fa0) (1) Data frame sent\nI0824 23:34:59.551977 148 log.go:181] (0xc000148fd0) (0xc000694fa0) Stream removed, broadcasting: 1\nI0824 23:34:59.552023 148 log.go:181] (0xc000148fd0) Go away received\nI0824 23:34:59.552542 148 log.go:181] (0xc000148fd0) (0xc000694fa0) Stream removed, broadcasting: 1\nI0824 23:34:59.552564 148 log.go:181] (0xc000148fd0) (0xc000c8e0a0) Stream removed, broadcasting: 3\nI0824 23:34:59.552575 148 log.go:181] (0xc000148fd0) (0xc000695720) Stream removed, broadcasting: 5\n" Aug 24 23:34:59.562: 
INFO: stdout: "" Aug 24 23:34:59.562: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2150 execpodk45n5 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 31593' Aug 24 23:34:59.792: INFO: stderr: "I0824 23:34:59.732974 166 log.go:181] (0xc0001dcfd0) (0xc000206640) Create stream\nI0824 23:34:59.733035 166 log.go:181] (0xc0001dcfd0) (0xc000206640) Stream added, broadcasting: 1\nI0824 23:34:59.739003 166 log.go:181] (0xc0001dcfd0) Reply frame received for 1\nI0824 23:34:59.739040 166 log.go:181] (0xc0001dcfd0) (0xc00091e500) Create stream\nI0824 23:34:59.739065 166 log.go:181] (0xc0001dcfd0) (0xc00091e500) Stream added, broadcasting: 3\nI0824 23:34:59.739999 166 log.go:181] (0xc0001dcfd0) Reply frame received for 3\nI0824 23:34:59.740042 166 log.go:181] (0xc0001dcfd0) (0xc000206f00) Create stream\nI0824 23:34:59.740055 166 log.go:181] (0xc0001dcfd0) (0xc000206f00) Stream added, broadcasting: 5\nI0824 23:34:59.741050 166 log.go:181] (0xc0001dcfd0) Reply frame received for 5\nI0824 23:34:59.785873 166 log.go:181] (0xc0001dcfd0) Data frame received for 5\nI0824 23:34:59.785904 166 log.go:181] (0xc000206f00) (5) Data frame handling\nI0824 23:34:59.785926 166 log.go:181] (0xc000206f00) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.11 31593\nConnection to 172.18.0.11 31593 port [tcp/31593] succeeded!\nI0824 23:34:59.786131 166 log.go:181] (0xc0001dcfd0) Data frame received for 5\nI0824 23:34:59.786147 166 log.go:181] (0xc000206f00) (5) Data frame handling\nI0824 23:34:59.786184 166 log.go:181] (0xc0001dcfd0) Data frame received for 3\nI0824 23:34:59.786202 166 log.go:181] (0xc00091e500) (3) Data frame handling\nI0824 23:34:59.787515 166 log.go:181] (0xc0001dcfd0) Data frame received for 1\nI0824 23:34:59.787538 166 log.go:181] (0xc000206640) (1) Data frame handling\nI0824 23:34:59.787566 166 log.go:181] (0xc000206640) (1) Data frame sent\nI0824 23:34:59.787582 166 log.go:181] (0xc0001dcfd0) (0xc000206640) Stream removed, broadcasting: 1\nI0824 23:34:59.787601 166 log.go:181] (0xc0001dcfd0) Go away received\nI0824 23:34:59.787969 166 log.go:181] (0xc0001dcfd0) (0xc000206640) Stream removed, broadcasting: 1\nI0824 23:34:59.787988 166 log.go:181] (0xc0001dcfd0) (0xc00091e500) Stream removed, broadcasting: 3\nI0824 23:34:59.787996 166 log.go:181] (0xc0001dcfd0) (0xc000206f00) Stream removed, broadcasting: 5\n" Aug 24 23:34:59.793: INFO: stdout: "" Aug 24 23:34:59.793: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2150 execpodk45n5 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 31593' Aug 24 23:35:00.017: INFO: stderr: "I0824 23:34:59.931348 184 log.go:181] (0xc00003b970) (0xc000872aa0) Create stream\nI0824 23:34:59.931407 184 log.go:181] (0xc00003b970) (0xc000872aa0) Stream added, broadcasting: 1\nI0824 23:34:59.934231 184 log.go:181] (0xc00003b970) Reply frame received for 1\nI0824 23:34:59.934276 184 log.go:181] (0xc00003b970) (0xc0005c00a0) Create stream\nI0824 23:34:59.934293 184 log.go:181] (0xc00003b970) (0xc0005c00a0) Stream added, broadcasting: 3\nI0824 23:34:59.935184 184 log.go:181] (0xc00003b970) Reply frame received for 3\nI0824 23:34:59.935224 184 log.go:181] (0xc00003b970) (0xc0001ee780) Create stream\nI0824 23:34:59.935238 184 log.go:181] (0xc00003b970) (0xc0001ee780) Stream added, broadcasting: 5\nI0824 23:34:59.936191 184 log.go:181] (0xc00003b970) Reply frame received for 5\nI0824 23:35:00.006827 184 log.go:181] 
(0xc00003b970) Data frame received for 5\nI0824 23:35:00.006851 184 log.go:181] (0xc0001ee780) (5) Data frame handling\nI0824 23:35:00.006863 184 log.go:181] (0xc0001ee780) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.14 31593\nConnection to 172.18.0.14 31593 port [tcp/31593] succeeded!\nI0824 23:35:00.007056 184 log.go:181] (0xc00003b970) Data frame received for 3\nI0824 23:35:00.007111 184 log.go:181] (0xc0005c00a0) (3) Data frame handling\nI0824 23:35:00.007151 184 log.go:181] (0xc00003b970) Data frame received for 5\nI0824 23:35:00.007174 184 log.go:181] (0xc0001ee780) (5) Data frame handling\nI0824 23:35:00.008192 184 log.go:181] (0xc00003b970) Data frame received for 1\nI0824 23:35:00.008209 184 log.go:181] (0xc000872aa0) (1) Data frame handling\nI0824 23:35:00.008218 184 log.go:181] (0xc000872aa0) (1) Data frame sent\nI0824 23:35:00.008229 184 log.go:181] (0xc00003b970) (0xc000872aa0) Stream removed, broadcasting: 1\nI0824 23:35:00.008253 184 log.go:181] (0xc00003b970) Go away received\nI0824 23:35:00.008585 184 log.go:181] (0xc00003b970) (0xc000872aa0) Stream removed, broadcasting: 1\nI0824 23:35:00.008600 184 log.go:181] (0xc00003b970) (0xc0005c00a0) Stream removed, broadcasting: 3\nI0824 23:35:00.008611 184 log.go:181] (0xc00003b970) (0xc0001ee780) Stream removed, broadcasting: 5\n" Aug 24 23:35:00.017: INFO: stdout: "" Aug 24 23:35:00.017: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:35:00.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2150" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:15.268 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":303,"completed":40,"skipped":683,"failed":0} SS ------------------------------ [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:35:00.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting 
/apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Aug 24 23:35:00.982: INFO: starting watch STEP: patching STEP: updating Aug 24 23:35:01.034: INFO: waiting for watch events with expected annotations Aug 24 23:35:01.034: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:35:01.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-7603" for this suite. •{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":303,"completed":41,"skipped":685,"failed":0} ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:35:01.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-4297 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-4297 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4297 Aug 24 23:35:01.597: INFO: Found 0 stateful pods, waiting for 1 Aug 24 23:35:11.601: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Aug 24 23:35:11.604: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4297 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 24 23:35:11.857: INFO: stderr: "I0824 23:35:11.736189 200 log.go:181] (0xc00003a0b0) (0xc0009ba000) Create stream\nI0824 23:35:11.736240 200 log.go:181] (0xc00003a0b0) (0xc0009ba000) Stream added, broadcasting: 1\nI0824 23:35:11.737482 200 log.go:181] (0xc00003a0b0) Reply 
frame received for 1\nI0824 23:35:11.737521 200 log.go:181] (0xc00003a0b0) (0xc000b1a000) Create stream\nI0824 23:35:11.737533 200 log.go:181] (0xc00003a0b0) (0xc000b1a000) Stream added, broadcasting: 3\nI0824 23:35:11.738043 200 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI0824 23:35:11.738066 200 log.go:181] (0xc00003a0b0) (0xc000376dc0) Create stream\nI0824 23:35:11.738079 200 log.go:181] (0xc00003a0b0) (0xc000376dc0) Stream added, broadcasting: 5\nI0824 23:35:11.738568 200 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI0824 23:35:11.795542 200 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0824 23:35:11.795585 200 log.go:181] (0xc000376dc0) (5) Data frame handling\nI0824 23:35:11.795620 200 log.go:181] (0xc000376dc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0824 23:35:11.843612 200 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0824 23:35:11.843661 200 log.go:181] (0xc000b1a000) (3) Data frame handling\nI0824 23:35:11.843688 200 log.go:181] (0xc000b1a000) (3) Data frame sent\nI0824 23:35:11.843703 200 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0824 23:35:11.843749 200 log.go:181] (0xc000b1a000) (3) Data frame handling\nI0824 23:35:11.843812 200 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0824 23:35:11.843836 200 log.go:181] (0xc000376dc0) (5) Data frame handling\nI0824 23:35:11.846122 200 log.go:181] (0xc00003a0b0) Data frame received for 1\nI0824 23:35:11.846142 200 log.go:181] (0xc0009ba000) (1) Data frame handling\nI0824 23:35:11.846153 200 log.go:181] (0xc0009ba000) (1) Data frame sent\nI0824 23:35:11.846164 200 log.go:181] (0xc00003a0b0) (0xc0009ba000) Stream removed, broadcasting: 1\nI0824 23:35:11.846183 200 log.go:181] (0xc00003a0b0) Go away received\nI0824 23:35:11.846530 200 log.go:181] (0xc00003a0b0) (0xc0009ba000) Stream removed, broadcasting: 1\nI0824 23:35:11.846554 200 log.go:181] (0xc00003a0b0) (0xc000b1a000) Stream removed, broadcasting: 3\nI0824 23:35:11.846566 200 log.go:181] (0xc00003a0b0) (0xc000376dc0) Stream removed, broadcasting: 5\n" Aug 24 23:35:11.857: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 24 23:35:11.857: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 24 23:35:11.861: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 24 23:35:21.886: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 24 23:35:21.886: INFO: Waiting for statefulset status.replicas updated to 0 Aug 24 23:35:21.967: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999925s Aug 24 23:35:22.972: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.954078675s Aug 24 23:35:23.981: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.949011029s Aug 24 23:35:24.985: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.940188695s Aug 24 23:35:26.035: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.936280295s Aug 24 23:35:27.040: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.885683022s Aug 24 23:35:28.044: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.881168695s Aug 24 23:35:29.049: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.877504252s Aug 24 23:35:30.053: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.872526424s Aug 24 
23:35:31.058: INFO: Verifying statefulset ss doesn't scale past 1 for another 868.08192ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4297 Aug 24 23:35:32.083: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4297 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 24 23:35:32.315: INFO: stderr: "I0824 23:35:32.228907 218 log.go:181] (0xc00003b340) (0xc000d188c0) Create stream\nI0824 23:35:32.228971 218 log.go:181] (0xc00003b340) (0xc000d188c0) Stream added, broadcasting: 1\nI0824 23:35:32.234429 218 log.go:181] (0xc00003b340) Reply frame received for 1\nI0824 23:35:32.234477 218 log.go:181] (0xc00003b340) (0xc000d18000) Create stream\nI0824 23:35:32.234491 218 log.go:181] (0xc00003b340) (0xc000d18000) Stream added, broadcasting: 3\nI0824 23:35:32.235485 218 log.go:181] (0xc00003b340) Reply frame received for 3\nI0824 23:35:32.235537 218 log.go:181] (0xc00003b340) (0xc000d180a0) Create stream\nI0824 23:35:32.235552 218 log.go:181] (0xc00003b340) (0xc000d180a0) Stream added, broadcasting: 5\nI0824 23:35:32.236354 218 log.go:181] (0xc00003b340) Reply frame received for 5\nI0824 23:35:32.304001 218 log.go:181] (0xc00003b340) Data frame received for 5\nI0824 23:35:32.304064 218 log.go:181] (0xc00003b340) Data frame received for 3\nI0824 23:35:32.304101 218 log.go:181] (0xc000d18000) (3) Data frame handling\nI0824 23:35:32.304117 218 log.go:181] (0xc000d18000) (3) Data frame sent\nI0824 23:35:32.304140 218 log.go:181] (0xc000d180a0) (5) Data frame handling\nI0824 23:35:32.304173 218 log.go:181] (0xc000d180a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0824 23:35:32.304204 218 log.go:181] (0xc00003b340) Data frame received for 3\nI0824 23:35:32.304309 218 log.go:181] (0xc000d18000) (3) Data frame handling\nI0824 23:35:32.304345 218 log.go:181] (0xc00003b340) Data frame received for 5\nI0824 23:35:32.304370 218 log.go:181] (0xc000d180a0) (5) Data frame handling\nI0824 23:35:32.305758 218 log.go:181] (0xc00003b340) Data frame received for 1\nI0824 23:35:32.305786 218 log.go:181] (0xc000d188c0) (1) Data frame handling\nI0824 23:35:32.305806 218 log.go:181] (0xc000d188c0) (1) Data frame sent\nI0824 23:35:32.305827 218 log.go:181] (0xc00003b340) (0xc000d188c0) Stream removed, broadcasting: 1\nI0824 23:35:32.305863 218 log.go:181] (0xc00003b340) Go away received\nI0824 23:35:32.306338 218 log.go:181] (0xc00003b340) (0xc000d188c0) Stream removed, broadcasting: 1\nI0824 23:35:32.306359 218 log.go:181] (0xc00003b340) (0xc000d18000) Stream removed, broadcasting: 3\nI0824 23:35:32.306370 218 log.go:181] (0xc00003b340) (0xc000d180a0) Stream removed, broadcasting: 5\n" Aug 24 23:35:32.315: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 24 23:35:32.315: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 24 23:35:32.318: INFO: Found 1 stateful pods, waiting for 3 Aug 24 23:35:42.323: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 24 23:35:42.323: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 24 23:35:42.323: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down 
will halt with unhealthy stateful pod Aug 24 23:35:42.331: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4297 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 24 23:35:42.573: INFO: stderr: "I0824 23:35:42.488148 236 log.go:181] (0xc000e3d080) (0xc000f1aa00) Create stream\nI0824 23:35:42.488234 236 log.go:181] (0xc000e3d080) (0xc000f1aa00) Stream added, broadcasting: 1\nI0824 23:35:42.493781 236 log.go:181] (0xc000e3d080) Reply frame received for 1\nI0824 23:35:42.493817 236 log.go:181] (0xc000e3d080) (0xc000f1a000) Create stream\nI0824 23:35:42.493827 236 log.go:181] (0xc000e3d080) (0xc000f1a000) Stream added, broadcasting: 3\nI0824 23:35:42.494782 236 log.go:181] (0xc000e3d080) Reply frame received for 3\nI0824 23:35:42.494826 236 log.go:181] (0xc000e3d080) (0xc000a85ea0) Create stream\nI0824 23:35:42.494841 236 log.go:181] (0xc000e3d080) (0xc000a85ea0) Stream added, broadcasting: 5\nI0824 23:35:42.495792 236 log.go:181] (0xc000e3d080) Reply frame received for 5\nI0824 23:35:42.566738 236 log.go:181] (0xc000e3d080) Data frame received for 3\nI0824 23:35:42.566780 236 log.go:181] (0xc000f1a000) (3) Data frame handling\nI0824 23:35:42.566792 236 log.go:181] (0xc000f1a000) (3) Data frame sent\nI0824 23:35:42.566802 236 log.go:181] (0xc000e3d080) Data frame received for 3\nI0824 23:35:42.566810 236 log.go:181] (0xc000f1a000) (3) Data frame handling\nI0824 23:35:42.566842 236 log.go:181] (0xc000e3d080) Data frame received for 5\nI0824 23:35:42.566862 236 log.go:181] (0xc000a85ea0) (5) Data frame handling\nI0824 23:35:42.566878 236 log.go:181] (0xc000a85ea0) (5) Data frame sent\nI0824 23:35:42.566887 236 log.go:181] (0xc000e3d080) Data frame received for 5\nI0824 23:35:42.566894 236 log.go:181] (0xc000a85ea0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0824 23:35:42.568296 236 log.go:181] (0xc000e3d080) Data frame received for 1\nI0824 23:35:42.568324 236 log.go:181] (0xc000f1aa00) (1) Data frame handling\nI0824 23:35:42.568335 236 log.go:181] (0xc000f1aa00) (1) Data frame sent\nI0824 23:35:42.568345 236 log.go:181] (0xc000e3d080) (0xc000f1aa00) Stream removed, broadcasting: 1\nI0824 23:35:42.568355 236 log.go:181] (0xc000e3d080) Go away received\nI0824 23:35:42.568849 236 log.go:181] (0xc000e3d080) (0xc000f1aa00) Stream removed, broadcasting: 1\nI0824 23:35:42.568876 236 log.go:181] (0xc000e3d080) (0xc000f1a000) Stream removed, broadcasting: 3\nI0824 23:35:42.568888 236 log.go:181] (0xc000e3d080) (0xc000a85ea0) Stream removed, broadcasting: 5\n" Aug 24 23:35:42.573: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 24 23:35:42.573: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 24 23:35:42.573: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4297 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 24 23:35:42.836: INFO: stderr: "I0824 23:35:42.697577 254 log.go:181] (0xc000194000) (0xc0008fe780) Create stream\nI0824 23:35:42.697635 254 log.go:181] (0xc000194000) (0xc0008fe780) Stream added, broadcasting: 1\nI0824 23:35:42.699873 254 log.go:181] (0xc000194000) Reply frame received for 1\nI0824 23:35:42.699901 254 log.go:181] (0xc000194000) (0xc000dae1e0) Create stream\nI0824 23:35:42.699909 
254 log.go:181] (0xc000194000) (0xc000dae1e0) Stream added, broadcasting: 3\nI0824 23:35:42.701018 254 log.go:181] (0xc000194000) Reply frame received for 3\nI0824 23:35:42.701064 254 log.go:181] (0xc000194000) (0xc0008fea00) Create stream\nI0824 23:35:42.701082 254 log.go:181] (0xc000194000) (0xc0008fea00) Stream added, broadcasting: 5\nI0824 23:35:42.702141 254 log.go:181] (0xc000194000) Reply frame received for 5\nI0824 23:35:42.782872 254 log.go:181] (0xc000194000) Data frame received for 5\nI0824 23:35:42.782909 254 log.go:181] (0xc0008fea00) (5) Data frame handling\nI0824 23:35:42.782934 254 log.go:181] (0xc0008fea00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0824 23:35:42.827172 254 log.go:181] (0xc000194000) Data frame received for 3\nI0824 23:35:42.827204 254 log.go:181] (0xc000dae1e0) (3) Data frame handling\nI0824 23:35:42.827225 254 log.go:181] (0xc000dae1e0) (3) Data frame sent\nI0824 23:35:42.827507 254 log.go:181] (0xc000194000) Data frame received for 5\nI0824 23:35:42.827523 254 log.go:181] (0xc0008fea00) (5) Data frame handling\nI0824 23:35:42.827546 254 log.go:181] (0xc000194000) Data frame received for 3\nI0824 23:35:42.827576 254 log.go:181] (0xc000dae1e0) (3) Data frame handling\nI0824 23:35:42.829576 254 log.go:181] (0xc000194000) Data frame received for 1\nI0824 23:35:42.829593 254 log.go:181] (0xc0008fe780) (1) Data frame handling\nI0824 23:35:42.829604 254 log.go:181] (0xc0008fe780) (1) Data frame sent\nI0824 23:35:42.829687 254 log.go:181] (0xc000194000) (0xc0008fe780) Stream removed, broadcasting: 1\nI0824 23:35:42.829977 254 log.go:181] (0xc000194000) (0xc0008fe780) Stream removed, broadcasting: 1\nI0824 23:35:42.829992 254 log.go:181] (0xc000194000) (0xc000dae1e0) Stream removed, broadcasting: 3\nI0824 23:35:42.830049 254 log.go:181] (0xc000194000) Go away received\nI0824 23:35:42.830089 254 log.go:181] (0xc000194000) (0xc0008fea00) Stream removed, broadcasting: 5\n" Aug 24 23:35:42.836: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 24 23:35:42.836: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 24 23:35:42.836: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4297 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 24 23:35:43.146: INFO: stderr: "I0824 23:35:43.009759 272 log.go:181] (0xc000b61080) (0xc0004a8be0) Create stream\nI0824 23:35:43.009813 272 log.go:181] (0xc000b61080) (0xc0004a8be0) Stream added, broadcasting: 1\nI0824 23:35:43.015439 272 log.go:181] (0xc000b61080) Reply frame received for 1\nI0824 23:35:43.015492 272 log.go:181] (0xc000b61080) (0xc000b0e0a0) Create stream\nI0824 23:35:43.015510 272 log.go:181] (0xc000b61080) (0xc000b0e0a0) Stream added, broadcasting: 3\nI0824 23:35:43.016401 272 log.go:181] (0xc000b61080) Reply frame received for 3\nI0824 23:35:43.016438 272 log.go:181] (0xc000b61080) (0xc0004a9680) Create stream\nI0824 23:35:43.016539 272 log.go:181] (0xc000b61080) (0xc0004a9680) Stream added, broadcasting: 5\nI0824 23:35:43.017542 272 log.go:181] (0xc000b61080) Reply frame received for 5\nI0824 23:35:43.076233 272 log.go:181] (0xc000b61080) Data frame received for 5\nI0824 23:35:43.076255 272 log.go:181] (0xc0004a9680) (5) Data frame handling\nI0824 23:35:43.076267 272 log.go:181] (0xc0004a9680) (5) Data frame sent\n+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\nI0824 23:35:43.133510 272 log.go:181] (0xc000b61080) Data frame received for 3\nI0824 23:35:43.133554 272 log.go:181] (0xc000b0e0a0) (3) Data frame handling\nI0824 23:35:43.133568 272 log.go:181] (0xc000b0e0a0) (3) Data frame sent\nI0824 23:35:43.133576 272 log.go:181] (0xc000b61080) Data frame received for 3\nI0824 23:35:43.133583 272 log.go:181] (0xc000b0e0a0) (3) Data frame handling\nI0824 23:35:43.133609 272 log.go:181] (0xc000b61080) Data frame received for 5\nI0824 23:35:43.133617 272 log.go:181] (0xc0004a9680) (5) Data frame handling\nI0824 23:35:43.134934 272 log.go:181] (0xc000b61080) Data frame received for 1\nI0824 23:35:43.134960 272 log.go:181] (0xc0004a8be0) (1) Data frame handling\nI0824 23:35:43.134984 272 log.go:181] (0xc0004a8be0) (1) Data frame sent\nI0824 23:35:43.135015 272 log.go:181] (0xc000b61080) (0xc0004a8be0) Stream removed, broadcasting: 1\nI0824 23:35:43.135043 272 log.go:181] (0xc000b61080) Go away received\nI0824 23:35:43.135308 272 log.go:181] (0xc000b61080) (0xc0004a8be0) Stream removed, broadcasting: 1\nI0824 23:35:43.135321 272 log.go:181] (0xc000b61080) (0xc000b0e0a0) Stream removed, broadcasting: 3\nI0824 23:35:43.135327 272 log.go:181] (0xc000b61080) (0xc0004a9680) Stream removed, broadcasting: 5\n" Aug 24 23:35:43.146: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 24 23:35:43.146: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 24 23:35:43.146: INFO: Waiting for statefulset status.replicas updated to 0 Aug 24 23:35:43.192: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Aug 24 23:35:53.201: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 24 23:35:53.201: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 24 23:35:53.201: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 24 23:35:53.222: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999466s Aug 24 23:35:54.228: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.987158095s Aug 24 23:35:55.233: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.981684207s Aug 24 23:35:56.244: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.976080135s Aug 24 23:35:57.249: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.965588839s Aug 24 23:35:58.254: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.960566905s Aug 24 23:35:59.260: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.955115453s Aug 24 23:36:00.264: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.949920391s Aug 24 23:36:01.268: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.945492878s Aug 24 23:36:02.273: INFO: Verifying statefulset ss doesn't scale past 3 for another 941.263038ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-4297 Aug 24 23:36:03.279: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4297 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 24 23:36:03.522: INFO: stderr: "I0824 23:36:03.426343 290 log.go:181] (0xc0005ce000) (0xc00071a460) Create stream\nI0824
23:36:03.426393 290 log.go:181] (0xc0005ce000) (0xc00071a460) Stream added, broadcasting: 1\nI0824 23:36:03.428069 290 log.go:181] (0xc0005ce000) Reply frame received for 1\nI0824 23:36:03.428100 290 log.go:181] (0xc0005ce000) (0xc00071b900) Create stream\nI0824 23:36:03.428118 290 log.go:181] (0xc0005ce000) (0xc00071b900) Stream added, broadcasting: 3\nI0824 23:36:03.428964 290 log.go:181] (0xc0005ce000) Reply frame received for 3\nI0824 23:36:03.429005 290 log.go:181] (0xc0005ce000) (0xc0007ac000) Create stream\nI0824 23:36:03.429017 290 log.go:181] (0xc0005ce000) (0xc0007ac000) Stream added, broadcasting: 5\nI0824 23:36:03.429785 290 log.go:181] (0xc0005ce000) Reply frame received for 5\nI0824 23:36:03.511343 290 log.go:181] (0xc0005ce000) Data frame received for 3\nI0824 23:36:03.511395 290 log.go:181] (0xc00071b900) (3) Data frame handling\nI0824 23:36:03.511418 290 log.go:181] (0xc00071b900) (3) Data frame sent\nI0824 23:36:03.511435 290 log.go:181] (0xc0005ce000) Data frame received for 3\nI0824 23:36:03.511450 290 log.go:181] (0xc00071b900) (3) Data frame handling\nI0824 23:36:03.511484 290 log.go:181] (0xc0005ce000) Data frame received for 5\nI0824 23:36:03.511506 290 log.go:181] (0xc0007ac000) (5) Data frame handling\nI0824 23:36:03.511524 290 log.go:181] (0xc0007ac000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0824 23:36:03.511539 290 log.go:181] (0xc0005ce000) Data frame received for 5\nI0824 23:36:03.511572 290 log.go:181] (0xc0007ac000) (5) Data frame handling\nI0824 23:36:03.512842 290 log.go:181] (0xc0005ce000) Data frame received for 1\nI0824 23:36:03.512874 290 log.go:181] (0xc00071a460) (1) Data frame handling\nI0824 23:36:03.512932 290 log.go:181] (0xc00071a460) (1) Data frame sent\nI0824 23:36:03.512954 290 log.go:181] (0xc0005ce000) (0xc00071a460) Stream removed, broadcasting: 1\nI0824 23:36:03.512979 290 log.go:181] (0xc0005ce000) Go away received\nI0824 23:36:03.513280 290 log.go:181] (0xc0005ce000) (0xc00071a460) Stream removed, broadcasting: 1\nI0824 23:36:03.513299 290 log.go:181] (0xc0005ce000) (0xc00071b900) Stream removed, broadcasting: 3\nI0824 23:36:03.513312 290 log.go:181] (0xc0005ce000) (0xc0007ac000) Stream removed, broadcasting: 5\n" Aug 24 23:36:03.522: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 24 23:36:03.522: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 24 23:36:03.522: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4297 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 24 23:36:03.734: INFO: stderr: "I0824 23:36:03.655806 308 log.go:181] (0xc000fa1080) (0xc0003a6dc0) Create stream\nI0824 23:36:03.655873 308 log.go:181] (0xc000fa1080) (0xc0003a6dc0) Stream added, broadcasting: 1\nI0824 23:36:03.664318 308 log.go:181] (0xc000fa1080) Reply frame received for 1\nI0824 23:36:03.664370 308 log.go:181] (0xc000fa1080) (0xc0003a75e0) Create stream\nI0824 23:36:03.664382 308 log.go:181] (0xc000fa1080) (0xc0003a75e0) Stream added, broadcasting: 3\nI0824 23:36:03.665395 308 log.go:181] (0xc000fa1080) Reply frame received for 3\nI0824 23:36:03.665436 308 log.go:181] (0xc000fa1080) (0xc00072dea0) Create stream\nI0824 23:36:03.665444 308 log.go:181] (0xc000fa1080) (0xc00072dea0) Stream added, broadcasting: 5\nI0824 23:36:03.666249 308 log.go:181] (0xc000fa1080) Reply frame 
received for 5\nI0824 23:36:03.725120 308 log.go:181] (0xc000fa1080) Data frame received for 5\nI0824 23:36:03.725161 308 log.go:181] (0xc00072dea0) (5) Data frame handling\nI0824 23:36:03.725182 308 log.go:181] (0xc00072dea0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0824 23:36:03.725221 308 log.go:181] (0xc000fa1080) Data frame received for 5\nI0824 23:36:03.725232 308 log.go:181] (0xc00072dea0) (5) Data frame handling\nI0824 23:36:03.725258 308 log.go:181] (0xc000fa1080) Data frame received for 3\nI0824 23:36:03.725272 308 log.go:181] (0xc0003a75e0) (3) Data frame handling\nI0824 23:36:03.725283 308 log.go:181] (0xc0003a75e0) (3) Data frame sent\nI0824 23:36:03.725304 308 log.go:181] (0xc000fa1080) Data frame received for 3\nI0824 23:36:03.725318 308 log.go:181] (0xc0003a75e0) (3) Data frame handling\nI0824 23:36:03.726868 308 log.go:181] (0xc000fa1080) Data frame received for 1\nI0824 23:36:03.726903 308 log.go:181] (0xc0003a6dc0) (1) Data frame handling\nI0824 23:36:03.726921 308 log.go:181] (0xc0003a6dc0) (1) Data frame sent\nI0824 23:36:03.726938 308 log.go:181] (0xc000fa1080) (0xc0003a6dc0) Stream removed, broadcasting: 1\nI0824 23:36:03.726957 308 log.go:181] (0xc000fa1080) Go away received\nI0824 23:36:03.727334 308 log.go:181] (0xc000fa1080) (0xc0003a6dc0) Stream removed, broadcasting: 1\nI0824 23:36:03.727359 308 log.go:181] (0xc000fa1080) (0xc0003a75e0) Stream removed, broadcasting: 3\nI0824 23:36:03.727370 308 log.go:181] (0xc000fa1080) (0xc00072dea0) Stream removed, broadcasting: 5\n" Aug 24 23:36:03.734: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 24 23:36:03.734: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 24 23:36:03.734: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4297 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 24 23:36:03.958: INFO: stderr: "I0824 23:36:03.884081 327 log.go:181] (0xc0006d7760) (0xc0006ceaa0) Create stream\nI0824 23:36:03.884126 327 log.go:181] (0xc0006d7760) (0xc0006ceaa0) Stream added, broadcasting: 1\nI0824 23:36:03.886478 327 log.go:181] (0xc0006d7760) Reply frame received for 1\nI0824 23:36:03.886501 327 log.go:181] (0xc0006d7760) (0xc000144460) Create stream\nI0824 23:36:03.886510 327 log.go:181] (0xc0006d7760) (0xc000144460) Stream added, broadcasting: 3\nI0824 23:36:03.887363 327 log.go:181] (0xc0006d7760) Reply frame received for 3\nI0824 23:36:03.887402 327 log.go:181] (0xc0006d7760) (0xc00053e5a0) Create stream\nI0824 23:36:03.887415 327 log.go:181] (0xc0006d7760) (0xc00053e5a0) Stream added, broadcasting: 5\nI0824 23:36:03.888245 327 log.go:181] (0xc0006d7760) Reply frame received for 5\nI0824 23:36:03.949710 327 log.go:181] (0xc0006d7760) Data frame received for 5\nI0824 23:36:03.949757 327 log.go:181] (0xc00053e5a0) (5) Data frame handling\nI0824 23:36:03.949772 327 log.go:181] (0xc00053e5a0) (5) Data frame sent\nI0824 23:36:03.949780 327 log.go:181] (0xc0006d7760) Data frame received for 5\nI0824 23:36:03.949787 327 log.go:181] (0xc00053e5a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0824 23:36:03.949814 327 log.go:181] (0xc0006d7760) Data frame received for 3\nI0824 23:36:03.949828 327 log.go:181] (0xc000144460) (3) Data frame handling\nI0824 23:36:03.949837 327 log.go:181] (0xc000144460) (3) Data frame 
sent\nI0824 23:36:03.949846 327 log.go:181] (0xc0006d7760) Data frame received for 3\nI0824 23:36:03.949858 327 log.go:181] (0xc000144460) (3) Data frame handling\nI0824 23:36:03.950923 327 log.go:181] (0xc0006d7760) Data frame received for 1\nI0824 23:36:03.950940 327 log.go:181] (0xc0006ceaa0) (1) Data frame handling\nI0824 23:36:03.950954 327 log.go:181] (0xc0006ceaa0) (1) Data frame sent\nI0824 23:36:03.950963 327 log.go:181] (0xc0006d7760) (0xc0006ceaa0) Stream removed, broadcasting: 1\nI0824 23:36:03.950975 327 log.go:181] (0xc0006d7760) Go away received\nI0824 23:36:03.951376 327 log.go:181] (0xc0006d7760) (0xc0006ceaa0) Stream removed, broadcasting: 1\nI0824 23:36:03.951401 327 log.go:181] (0xc0006d7760) (0xc000144460) Stream removed, broadcasting: 3\nI0824 23:36:03.951411 327 log.go:181] (0xc0006d7760) (0xc00053e5a0) Stream removed, broadcasting: 5\n" Aug 24 23:36:03.958: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 24 23:36:03.958: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 24 23:36:03.958: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 24 23:36:33.973: INFO: Deleting all statefulset in ns statefulset-4297 Aug 24 23:36:33.975: INFO: Scaling statefulset ss to 0 Aug 24 23:36:33.982: INFO: Waiting for statefulset status.replicas updated to 0 Aug 24 23:36:33.984: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:36:33.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4297" for this suite. 
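The halt verified above is the StatefulSet controller's readiness gate: ordinal N+1 is created only after ordinal N reports Ready, and scale-down removes the highest ordinal only while the remaining pods are Ready. The test trips the gate by moving index.html out of the httpd docroot, which, judging by the Ready=true -> Ready=false transitions that follow each mv in this log, is exactly what the pods' readiness probe checks. A minimal sketch of the same experiment run by hand, reusing the names from this run (ss, namespace statefulset-4297) and assuming the same httpd-based probe:

# Break ss-0's readiness probe by hiding the file it serves:
kubectl exec -n statefulset-4297 ss-0 -- /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'

# Request 3 replicas; with ss-0 unready the controller creates no new ordinals:
kubectl scale statefulset ss -n statefulset-4297 --replicas=3
kubectl get statefulset ss -n statefulset-4297    # READY stays 0/3

# Restore the file; ss-1 and then ss-2 come up strictly in order:
kubectl exec -n statefulset-4297 ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'

The 10-second "doesn't scale past N" loops in the log above are the suite polling to prove that no pod is created (or deleted) while any replica is unready.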
• [SLOW TEST:92.744 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":303,"completed":42,"skipped":685,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:36:34.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container Aug 24 23:36:40.193: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-2749 PodName:pod-sharedvolume-88e03a66-0b86-4e36-9ed5-03bff35ae3a1 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 24 23:36:40.193: INFO: >>> kubeConfig: /root/.kube/config I0824 23:36:40.225469 7 log.go:181] (0xc006ffc6e0) (0xc0016e3180) Create stream I0824 23:36:40.225507 7 log.go:181] (0xc006ffc6e0) (0xc0016e3180) Stream added, broadcasting: 1 I0824 23:36:40.226985 7 log.go:181] (0xc006ffc6e0) Reply frame received for 1 I0824 23:36:40.227020 7 log.go:181] (0xc006ffc6e0) (0xc0016e3220) Create stream I0824 23:36:40.227033 7 log.go:181] (0xc006ffc6e0) (0xc0016e3220) Stream added, broadcasting: 3 I0824 23:36:40.227772 7 log.go:181] (0xc006ffc6e0) Reply frame received for 3 I0824 23:36:40.227809 7 log.go:181] (0xc006ffc6e0) (0xc001c24aa0) Create stream I0824 23:36:40.227823 7 log.go:181] (0xc006ffc6e0) (0xc001c24aa0) Stream added, broadcasting: 5 I0824 23:36:40.228485 7 log.go:181] (0xc006ffc6e0) Reply frame received for 5 I0824 23:36:40.299698 7 log.go:181] (0xc006ffc6e0) Data frame received for 3 I0824 23:36:40.299762 7 log.go:181] (0xc0016e3220) (3) Data frame handling I0824 23:36:40.299802 7 log.go:181] (0xc0016e3220) (3) Data frame sent I0824 23:36:40.299830 7 log.go:181] (0xc006ffc6e0) Data frame received for 3 I0824 23:36:40.299860 7 log.go:181] (0xc0016e3220) (3) Data frame handling I0824 23:36:40.299900 7 log.go:181]
(0xc006ffc6e0) Data frame received for 5 I0824 23:36:40.299930 7 log.go:181] (0xc001c24aa0) (5) Data frame handling I0824 23:36:40.301311 7 log.go:181] (0xc006ffc6e0) Data frame received for 1 I0824 23:36:40.301338 7 log.go:181] (0xc0016e3180) (1) Data frame handling I0824 23:36:40.301375 7 log.go:181] (0xc0016e3180) (1) Data frame sent I0824 23:36:40.301412 7 log.go:181] (0xc006ffc6e0) (0xc0016e3180) Stream removed, broadcasting: 1 I0824 23:36:40.301435 7 log.go:181] (0xc006ffc6e0) Go away received I0824 23:36:40.301534 7 log.go:181] (0xc006ffc6e0) (0xc0016e3180) Stream removed, broadcasting: 1 I0824 23:36:40.301580 7 log.go:181] (0xc006ffc6e0) (0xc0016e3220) Stream removed, broadcasting: 3 I0824 23:36:40.301610 7 log.go:181] (0xc006ffc6e0) (0xc001c24aa0) Stream removed, broadcasting: 5 Aug 24 23:36:40.301: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:36:40.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2749" for this suite. • [SLOW TEST:6.304 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":303,"completed":43,"skipped":693,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:36:40.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-d5235476-0d3f-44a8-a5f8-92412cb84ac4 STEP: Creating a pod to test consume secrets Aug 24 23:36:40.393: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e2f071c9-6d79-42e1-a7cd-084b4f449049" in namespace "projected-2911" to be "Succeeded or Failed" Aug 24 23:36:40.412: INFO: Pod "pod-projected-secrets-e2f071c9-6d79-42e1-a7cd-084b4f449049": Phase="Pending", Reason="", readiness=false. Elapsed: 18.490087ms Aug 24 23:36:42.417: INFO: Pod "pod-projected-secrets-e2f071c9-6d79-42e1-a7cd-084b4f449049": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.023346535s Aug 24 23:36:44.420: INFO: Pod "pod-projected-secrets-e2f071c9-6d79-42e1-a7cd-084b4f449049": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026985562s STEP: Saw pod success Aug 24 23:36:44.420: INFO: Pod "pod-projected-secrets-e2f071c9-6d79-42e1-a7cd-084b4f449049" satisfied condition "Succeeded or Failed" Aug 24 23:36:44.423: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-e2f071c9-6d79-42e1-a7cd-084b4f449049 container projected-secret-volume-test: STEP: delete the pod Aug 24 23:36:44.474: INFO: Waiting for pod pod-projected-secrets-e2f071c9-6d79-42e1-a7cd-084b4f449049 to disappear Aug 24 23:36:44.481: INFO: Pod pod-projected-secrets-e2f071c9-6d79-42e1-a7cd-084b4f449049 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:36:44.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2911" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":44,"skipped":732,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:36:44.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-a5e1392e-ffd0-453a-beb5-b6582416e758 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-a5e1392e-ffd0-453a-beb5-b6582416e758 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:38:09.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9468" for this suite. 
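What this test exercises is kubelet-side refresh of projected ConfigMap volumes: the ConfigMap is mutated through the API and the suite then polls the mounted file until the new data appears, which is where most of the ~85 seconds in the summary just below is spent. A rough reproduction outside the suite, with all names hypothetical (cm-demo, demo-cm, and cm-watch are not from this log):

kubectl create namespace cm-demo
kubectl -n cm-demo create configmap demo-cm --from-literal=key=initial
cat <<'EOF' | kubectl -n cm-demo apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-watch
spec:
  containers:
  - name: watcher
    image: busybox:1.29
    # Print the projected file repeatedly so the refresh is visible in the logs:
    command: ["/bin/sh", "-c", "while true; do cat /etc/cm/key; echo; sleep 5; done"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cm
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-cm
EOF

# Mutate the ConfigMap; the kubelet rewrites the projected file on a later
# sync pass, typically within a minute rather than immediately:
kubectl -n cm-demo patch configmap demo-cm -p '{"data":{"key":"updated"}}'
kubectl -n cm-demo logs -f cm-watch    # eventually switches from "initial" to "updated"

Note that a subPath mount would not see the update; only whole-volume ConfigMap and projected mounts are refreshed in place.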
• [SLOW TEST:84.840 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":45,"skipped":735,"failed":0} SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:38:09.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-cvtv STEP: Creating a pod to test atomic-volume-subpath Aug 24 23:38:09.518: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-cvtv" in namespace "subpath-7663" to be "Succeeded or Failed" Aug 24 23:38:09.532: INFO: Pod "pod-subpath-test-downwardapi-cvtv": Phase="Pending", Reason="", readiness=false. Elapsed: 14.212102ms Aug 24 23:38:11.536: INFO: Pod "pod-subpath-test-downwardapi-cvtv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017496215s Aug 24 23:38:13.539: INFO: Pod "pod-subpath-test-downwardapi-cvtv": Phase="Running", Reason="", readiness=true. Elapsed: 4.021260525s Aug 24 23:38:15.541: INFO: Pod "pod-subpath-test-downwardapi-cvtv": Phase="Running", Reason="", readiness=true. Elapsed: 6.02332875s Aug 24 23:38:17.545: INFO: Pod "pod-subpath-test-downwardapi-cvtv": Phase="Running", Reason="", readiness=true. Elapsed: 8.027000296s Aug 24 23:38:19.549: INFO: Pod "pod-subpath-test-downwardapi-cvtv": Phase="Running", Reason="", readiness=true. Elapsed: 10.03054085s Aug 24 23:38:21.552: INFO: Pod "pod-subpath-test-downwardapi-cvtv": Phase="Running", Reason="", readiness=true. Elapsed: 12.033935132s Aug 24 23:38:23.556: INFO: Pod "pod-subpath-test-downwardapi-cvtv": Phase="Running", Reason="", readiness=true. Elapsed: 14.037484865s Aug 24 23:38:25.559: INFO: Pod "pod-subpath-test-downwardapi-cvtv": Phase="Running", Reason="", readiness=true. Elapsed: 16.04095788s Aug 24 23:38:27.562: INFO: Pod "pod-subpath-test-downwardapi-cvtv": Phase="Running", Reason="", readiness=true. Elapsed: 18.044219627s Aug 24 23:38:29.566: INFO: Pod "pod-subpath-test-downwardapi-cvtv": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.047642481s Aug 24 23:38:31.570: INFO: Pod "pod-subpath-test-downwardapi-cvtv": Phase="Running", Reason="", readiness=true. Elapsed: 22.051937712s Aug 24 23:38:33.574: INFO: Pod "pod-subpath-test-downwardapi-cvtv": Phase="Running", Reason="", readiness=true. Elapsed: 24.056172448s Aug 24 23:38:35.579: INFO: Pod "pod-subpath-test-downwardapi-cvtv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.060465227s STEP: Saw pod success Aug 24 23:38:35.579: INFO: Pod "pod-subpath-test-downwardapi-cvtv" satisfied condition "Succeeded or Failed" Aug 24 23:38:35.582: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-cvtv container test-container-subpath-downwardapi-cvtv: STEP: delete the pod Aug 24 23:38:35.668: INFO: Waiting for pod pod-subpath-test-downwardapi-cvtv to disappear Aug 24 23:38:35.681: INFO: Pod pod-subpath-test-downwardapi-cvtv no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-cvtv Aug 24 23:38:35.681: INFO: Deleting pod "pod-subpath-test-downwardapi-cvtv" in namespace "subpath-7663" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:38:35.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7663" for this suite. • [SLOW TEST:26.363 seconds] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":303,"completed":46,"skipped":737,"failed":0} [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:38:35.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-be2c9faf-ba04-4128-af06-444aacbbcef0 STEP: Creating a pod to test consume configMaps Aug 24 23:38:35.755: INFO: Waiting up to 5m0s for pod "pod-configmaps-88be4c9e-4676-4ef0-bafc-76574427d6c4" in namespace "configmap-15" to be "Succeeded or Failed" Aug 24 23:38:35.758: INFO: Pod "pod-configmaps-88be4c9e-4676-4ef0-bafc-76574427d6c4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.313735ms Aug 24 23:38:37.763: INFO: Pod "pod-configmaps-88be4c9e-4676-4ef0-bafc-76574427d6c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007353109s Aug 24 23:38:39.767: INFO: Pod "pod-configmaps-88be4c9e-4676-4ef0-bafc-76574427d6c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011982683s STEP: Saw pod success Aug 24 23:38:39.767: INFO: Pod "pod-configmaps-88be4c9e-4676-4ef0-bafc-76574427d6c4" satisfied condition "Succeeded or Failed" Aug 24 23:38:39.770: INFO: Trying to get logs from node latest-worker pod pod-configmaps-88be4c9e-4676-4ef0-bafc-76574427d6c4 container configmap-volume-test: STEP: delete the pod Aug 24 23:38:39.842: INFO: Waiting for pod pod-configmaps-88be4c9e-4676-4ef0-bafc-76574427d6c4 to disappear Aug 24 23:38:39.856: INFO: Pod pod-configmaps-88be4c9e-4676-4ef0-bafc-76574427d6c4 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:38:39.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-15" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":47,"skipped":737,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:38:39.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-bdbb446f-4f6b-458c-a174-85fab9a43f6d STEP: Creating a pod to test consume configMaps Aug 24 23:38:40.017: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4bbdaf13-5ecb-4e57-916b-71e535ab0494" in namespace "projected-364" to be "Succeeded or Failed" Aug 24 23:38:40.040: INFO: Pod "pod-projected-configmaps-4bbdaf13-5ecb-4e57-916b-71e535ab0494": Phase="Pending", Reason="", readiness=false. Elapsed: 22.74264ms Aug 24 23:38:42.313: INFO: Pod "pod-projected-configmaps-4bbdaf13-5ecb-4e57-916b-71e535ab0494": Phase="Pending", Reason="", readiness=false. Elapsed: 2.296628668s Aug 24 23:38:44.318: INFO: Pod "pod-projected-configmaps-4bbdaf13-5ecb-4e57-916b-71e535ab0494": Phase="Pending", Reason="", readiness=false. Elapsed: 4.301537606s Aug 24 23:38:46.323: INFO: Pod "pod-projected-configmaps-4bbdaf13-5ecb-4e57-916b-71e535ab0494": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.305675658s STEP: Saw pod success Aug 24 23:38:46.323: INFO: Pod "pod-projected-configmaps-4bbdaf13-5ecb-4e57-916b-71e535ab0494" satisfied condition "Succeeded or Failed" Aug 24 23:38:46.325: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-4bbdaf13-5ecb-4e57-916b-71e535ab0494 container projected-configmap-volume-test: STEP: delete the pod Aug 24 23:38:46.368: INFO: Waiting for pod pod-projected-configmaps-4bbdaf13-5ecb-4e57-916b-71e535ab0494 to disappear Aug 24 23:38:46.396: INFO: Pod pod-projected-configmaps-4bbdaf13-5ecb-4e57-916b-71e535ab0494 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:38:46.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-364" for this suite. • [SLOW TEST:6.541 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":48,"skipped":770,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:38:46.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 24 23:38:46.537: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b283c43-ae75-42af-a0f1-d41ed7d41d71" in namespace "downward-api-4688" to be "Succeeded or Failed" Aug 24 23:38:46.590: INFO: Pod "downwardapi-volume-7b283c43-ae75-42af-a0f1-d41ed7d41d71": Phase="Pending", Reason="", readiness=false. Elapsed: 52.688091ms Aug 24 23:38:48.594: INFO: Pod "downwardapi-volume-7b283c43-ae75-42af-a0f1-d41ed7d41d71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05675957s Aug 24 23:38:50.598: INFO: Pod "downwardapi-volume-7b283c43-ae75-42af-a0f1-d41ed7d41d71": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.061288176s STEP: Saw pod success Aug 24 23:38:50.598: INFO: Pod "downwardapi-volume-7b283c43-ae75-42af-a0f1-d41ed7d41d71" satisfied condition "Succeeded or Failed" Aug 24 23:38:50.601: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-7b283c43-ae75-42af-a0f1-d41ed7d41d71 container client-container: STEP: delete the pod Aug 24 23:38:50.632: INFO: Waiting for pod downwardapi-volume-7b283c43-ae75-42af-a0f1-d41ed7d41d71 to disappear Aug 24 23:38:50.639: INFO: Pod downwardapi-volume-7b283c43-ae75-42af-a0f1-d41ed7d41d71 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:38:50.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4688" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":49,"skipped":792,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:38:50.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 24 23:38:50.721: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dda9dccf-8d47-48f5-819b-95caf3dbbe71" in namespace "projected-6333" to be "Succeeded or Failed" Aug 24 23:38:50.732: INFO: Pod "downwardapi-volume-dda9dccf-8d47-48f5-819b-95caf3dbbe71": Phase="Pending", Reason="", readiness=false. Elapsed: 11.51463ms Aug 24 23:38:52.737: INFO: Pod "downwardapi-volume-dda9dccf-8d47-48f5-819b-95caf3dbbe71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015846507s Aug 24 23:38:54.740: INFO: Pod "downwardapi-volume-dda9dccf-8d47-48f5-819b-95caf3dbbe71": Phase="Running", Reason="", readiness=true. Elapsed: 4.019230195s Aug 24 23:38:56.745: INFO: Pod "downwardapi-volume-dda9dccf-8d47-48f5-819b-95caf3dbbe71": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.024024743s STEP: Saw pod success Aug 24 23:38:56.745: INFO: Pod "downwardapi-volume-dda9dccf-8d47-48f5-819b-95caf3dbbe71" satisfied condition "Succeeded or Failed" Aug 24 23:38:56.748: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-dda9dccf-8d47-48f5-819b-95caf3dbbe71 container client-container: STEP: delete the pod Aug 24 23:38:56.790: INFO: Waiting for pod downwardapi-volume-dda9dccf-8d47-48f5-819b-95caf3dbbe71 to disappear Aug 24 23:38:56.802: INFO: Pod downwardapi-volume-dda9dccf-8d47-48f5-819b-95caf3dbbe71 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:38:56.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6333" for this suite. • [SLOW TEST:6.144 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":50,"skipped":794,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:38:56.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-8210/configmap-test-a8fabe54-69d7-4632-9573-73a185607d1e STEP: Creating a pod to test consume configMaps Aug 24 23:38:56.930: INFO: Waiting up to 5m0s for pod "pod-configmaps-af47964a-ad8f-4bc1-aed2-4145645911ce" in namespace "configmap-8210" to be "Succeeded or Failed" Aug 24 23:38:56.949: INFO: Pod "pod-configmaps-af47964a-ad8f-4bc1-aed2-4145645911ce": Phase="Pending", Reason="", readiness=false. Elapsed: 18.40757ms Aug 24 23:38:58.958: INFO: Pod "pod-configmaps-af47964a-ad8f-4bc1-aed2-4145645911ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027001273s Aug 24 23:39:01.016: INFO: Pod "pod-configmaps-af47964a-ad8f-4bc1-aed2-4145645911ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085328579s Aug 24 23:39:03.021: INFO: Pod "pod-configmaps-af47964a-ad8f-4bc1-aed2-4145645911ce": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.090743565s STEP: Saw pod success Aug 24 23:39:03.021: INFO: Pod "pod-configmaps-af47964a-ad8f-4bc1-aed2-4145645911ce" satisfied condition "Succeeded or Failed" Aug 24 23:39:03.025: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-af47964a-ad8f-4bc1-aed2-4145645911ce container env-test: STEP: delete the pod Aug 24 23:39:03.106: INFO: Waiting for pod pod-configmaps-af47964a-ad8f-4bc1-aed2-4145645911ce to disappear Aug 24 23:39:03.114: INFO: Pod pod-configmaps-af47964a-ad8f-4bc1-aed2-4145645911ce no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:39:03.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8210" for this suite. • [SLOW TEST:6.311 seconds] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":303,"completed":51,"skipped":806,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:39:03.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-82ec1a63-6c11-4478-9091-74ed4369b657 STEP: Creating a pod to test consume configMaps Aug 24 23:39:03.308: INFO: Waiting up to 5m0s for pod "pod-configmaps-0c811bd7-12a5-415b-8404-3ab8fbc3ed22" in namespace "configmap-7828" to be "Succeeded or Failed" Aug 24 23:39:03.383: INFO: Pod "pod-configmaps-0c811bd7-12a5-415b-8404-3ab8fbc3ed22": Phase="Pending", Reason="", readiness=false. Elapsed: 75.048132ms Aug 24 23:39:05.475: INFO: Pod "pod-configmaps-0c811bd7-12a5-415b-8404-3ab8fbc3ed22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167664959s Aug 24 23:39:07.535: INFO: Pod "pod-configmaps-0c811bd7-12a5-415b-8404-3ab8fbc3ed22": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.22725886s STEP: Saw pod success Aug 24 23:39:07.535: INFO: Pod "pod-configmaps-0c811bd7-12a5-415b-8404-3ab8fbc3ed22" satisfied condition "Succeeded or Failed" Aug 24 23:39:07.538: INFO: Trying to get logs from node latest-worker pod pod-configmaps-0c811bd7-12a5-415b-8404-3ab8fbc3ed22 container configmap-volume-test: STEP: delete the pod Aug 24 23:39:07.578: INFO: Waiting for pod pod-configmaps-0c811bd7-12a5-415b-8404-3ab8fbc3ed22 to disappear Aug 24 23:39:07.598: INFO: Pod pod-configmaps-0c811bd7-12a5-415b-8404-3ab8fbc3ed22 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:39:07.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7828" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":52,"skipped":826,"failed":0} SSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:39:07.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments Aug 24 23:39:07.890: INFO: Waiting up to 5m0s for pod "client-containers-cc344713-0f6d-49f0-8307-4446d659bce7" in namespace "containers-9781" to be "Succeeded or Failed" Aug 24 23:39:07.925: INFO: Pod "client-containers-cc344713-0f6d-49f0-8307-4446d659bce7": Phase="Pending", Reason="", readiness=false. Elapsed: 34.479901ms Aug 24 23:39:10.026: INFO: Pod "client-containers-cc344713-0f6d-49f0-8307-4446d659bce7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13552907s Aug 24 23:39:12.029: INFO: Pod "client-containers-cc344713-0f6d-49f0-8307-4446d659bce7": Phase="Running", Reason="", readiness=true. Elapsed: 4.138892384s Aug 24 23:39:14.041: INFO: Pod "client-containers-cc344713-0f6d-49f0-8307-4446d659bce7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.150418104s STEP: Saw pod success Aug 24 23:39:14.041: INFO: Pod "client-containers-cc344713-0f6d-49f0-8307-4446d659bce7" satisfied condition "Succeeded or Failed" Aug 24 23:39:14.043: INFO: Trying to get logs from node latest-worker pod client-containers-cc344713-0f6d-49f0-8307-4446d659bce7 container test-container: STEP: delete the pod Aug 24 23:39:14.079: INFO: Waiting for pod client-containers-cc344713-0f6d-49f0-8307-4446d659bce7 to disappear Aug 24 23:39:14.084: INFO: Pod client-containers-cc344713-0f6d-49f0-8307-4446d659bce7 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:39:14.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9781" for this suite. • [SLOW TEST:6.352 seconds] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":303,"completed":53,"skipped":833,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:39:14.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Aug 24 23:39:18.223: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-4911 PodName:var-expansion-96ecabe8-e1f3-4cb6-b25f-7a092e06c43b ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 24 23:39:18.223: INFO: >>> kubeConfig: /root/.kube/config I0824 23:39:18.268035 7 log.go:181] (0xc007c6a2c0) (0xc001371220) Create stream I0824 23:39:18.268079 7 log.go:181] (0xc007c6a2c0) (0xc001371220) Stream added, broadcasting: 1 I0824 23:39:18.271128 7 log.go:181] (0xc007c6a2c0) Reply frame received for 1 I0824 23:39:18.271186 7 log.go:181] (0xc007c6a2c0) (0xc0033aa3c0) Create stream I0824 23:39:18.271210 7 log.go:181] (0xc007c6a2c0) (0xc0033aa3c0) Stream added, broadcasting: 3 I0824 23:39:18.272339 7 log.go:181] (0xc007c6a2c0) Reply frame received for 3 I0824 23:39:18.272410 7 log.go:181] (0xc007c6a2c0) 
(0xc003b06f00) Create stream I0824 23:39:18.272427 7 log.go:181] (0xc007c6a2c0) (0xc003b06f00) Stream added, broadcasting: 5 I0824 23:39:18.273681 7 log.go:181] (0xc007c6a2c0) Reply frame received for 5 I0824 23:39:18.353493 7 log.go:181] (0xc007c6a2c0) Data frame received for 3 I0824 23:39:18.353544 7 log.go:181] (0xc0033aa3c0) (3) Data frame handling I0824 23:39:18.353599 7 log.go:181] (0xc007c6a2c0) Data frame received for 5 I0824 23:39:18.353612 7 log.go:181] (0xc003b06f00) (5) Data frame handling I0824 23:39:18.354967 7 log.go:181] (0xc007c6a2c0) Data frame received for 1 I0824 23:39:18.355035 7 log.go:181] (0xc001371220) (1) Data frame handling I0824 23:39:18.355090 7 log.go:181] (0xc001371220) (1) Data frame sent I0824 23:39:18.355110 7 log.go:181] (0xc007c6a2c0) (0xc001371220) Stream removed, broadcasting: 1 I0824 23:39:18.355283 7 log.go:181] (0xc007c6a2c0) (0xc001371220) Stream removed, broadcasting: 1 I0824 23:39:18.355323 7 log.go:181] (0xc007c6a2c0) (0xc0033aa3c0) Stream removed, broadcasting: 3 I0824 23:39:18.355458 7 log.go:181] (0xc007c6a2c0) (0xc003b06f00) Stream removed, broadcasting: 5 STEP: test for file in mounted path Aug 24 23:39:18.360: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-4911 PodName:var-expansion-96ecabe8-e1f3-4cb6-b25f-7a092e06c43b ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 24 23:39:18.360: INFO: >>> kubeConfig: /root/.kube/config I0824 23:39:18.386647 7 log.go:181] (0xc000851550) (0xc0033aa820) Create stream I0824 23:39:18.386674 7 log.go:181] (0xc000851550) (0xc0033aa820) Stream added, broadcasting: 1 I0824 23:39:18.389272 7 log.go:181] (0xc000851550) Reply frame received for 1 I0824 23:39:18.389290 7 log.go:181] (0xc000851550) (0xc0033aa8c0) Create stream I0824 23:39:18.389301 7 log.go:181] (0xc000851550) (0xc0033aa8c0) Stream added, broadcasting: 3 I0824 23:39:18.389924 7 log.go:181] (0xc000851550) Reply frame received for 3 I0824 23:39:18.389955 7 log.go:181] (0xc000851550) (0xc001371400) Create stream I0824 23:39:18.389969 7 log.go:181] (0xc000851550) (0xc001371400) Stream added, broadcasting: 5 I0824 23:39:18.390576 7 log.go:181] (0xc000851550) Reply frame received for 5 I0824 23:39:18.449576 7 log.go:181] (0xc000851550) Data frame received for 5 I0824 23:39:18.449607 7 log.go:181] (0xc001371400) (5) Data frame handling I0824 23:39:18.449624 7 log.go:181] (0xc000851550) Data frame received for 3 I0824 23:39:18.449632 7 log.go:181] (0xc0033aa8c0) (3) Data frame handling I0824 23:39:18.450648 7 log.go:181] (0xc000851550) Data frame received for 1 I0824 23:39:18.450662 7 log.go:181] (0xc0033aa820) (1) Data frame handling I0824 23:39:18.450677 7 log.go:181] (0xc0033aa820) (1) Data frame sent I0824 23:39:18.450745 7 log.go:181] (0xc000851550) (0xc0033aa820) Stream removed, broadcasting: 1 I0824 23:39:18.450795 7 log.go:181] (0xc000851550) Go away received I0824 23:39:18.450871 7 log.go:181] (0xc000851550) (0xc0033aa820) Stream removed, broadcasting: 1 I0824 23:39:18.450890 7 log.go:181] (0xc000851550) (0xc0033aa8c0) Stream removed, broadcasting: 3 I0824 23:39:18.450898 7 log.go:181] (0xc000851550) (0xc001371400) Stream removed, broadcasting: 5 STEP: updating the annotation value Aug 24 23:39:18.961: INFO: Successfully updated pod "var-expansion-96ecabe8-e1f3-4cb6-b25f-7a092e06c43b" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Aug 24 23:39:18.971: INFO: Deleting pod 
"var-expansion-96ecabe8-e1f3-4cb6-b25f-7a092e06c43b" in namespace "var-expansion-4911" Aug 24 23:39:18.975: INFO: Wait up to 5m0s for pod "var-expansion-96ecabe8-e1f3-4cb6-b25f-7a092e06c43b" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:40:00.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4911" for this suite. • [SLOW TEST:46.916 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":303,"completed":54,"skipped":853,"failed":0} [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:40:01.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check is all data is printed [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 24 23:40:01.113: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config version' Aug 24 23:40:01.279: INFO: stderr: "" Aug 24 23:40:01.279: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-rc.4\", GitCommit:\"1afc53514032a44d091ae4a9f6e092171db9fe10\", GitTreeState:\"clean\", BuildDate:\"2020-08-04T14:29:10Z\", GoVersion:\"go1.15rc1\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-rc.1\", GitCommit:\"2cbdfecbbd57dbd4e9f42d73a75fbbc6d9eadfd3\", GitTreeState:\"clean\", BuildDate:\"2020-07-19T21:33:31Z\", GoVersion:\"go1.14.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:40:01.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6055" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":303,"completed":55,"skipped":853,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:40:01.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-598f4a49-d9e6-4a9f-9158-6a660154c0b1 in namespace container-probe-7266 Aug 24 23:40:05.431: INFO: Started pod busybox-598f4a49-d9e6-4a9f-9158-6a660154c0b1 in namespace container-probe-7266 STEP: checking the pod's current state and verifying that restartCount is present Aug 24 23:40:05.435: INFO: Initial restart count of pod busybox-598f4a49-d9e6-4a9f-9158-6a660154c0b1 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:44:06.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7266" for this suite. 
• [SLOW TEST:245.053 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":56,"skipped":873,"failed":0} [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:44:06.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 24 23:44:06.432: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:44:07.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-271" for this suite. 
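The status sub-resource exercised above only exists on a CRD that declares it. A hedged sketch of an apiextensions.k8s.io/v1 CustomResourceDefinition with the status subresource enabled (group, kind, and schema invented for illustration):

```go
package main

import (
	"fmt"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	crd := &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextv1.ClusterScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				// With the status subresource declared, PUT/PATCH against
				// /status touch only .status, and writes to the main
				// endpoint ignore .status -- which is what the test asserts.
				Subresources: &apiextv1.CustomResourceSubresources{
					Status: &apiextv1.CustomResourceSubresourceStatus{},
				},
				Schema: &apiextv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextv1.JSONSchemaProps{
						Type:                   "object",
						XPreserveUnknownFields: boolPtr(true),
					},
				},
			}},
		},
	}
	fmt.Println("CRD:", crd.Name)
}
```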
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":303,"completed":57,"skipped":873,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:44:07.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-h8pxh in namespace proxy-148 I0824 23:44:07.280654 7 runners.go:190] Created replication controller with name: proxy-service-h8pxh, namespace: proxy-148, replica count: 1 I0824 23:44:08.331017 7 runners.go:190] proxy-service-h8pxh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0824 23:44:09.333507 7 runners.go:190] proxy-service-h8pxh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0824 23:44:10.333769 7 runners.go:190] proxy-service-h8pxh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0824 23:44:11.333971 7 runners.go:190] proxy-service-h8pxh Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0824 23:44:12.334121 7 runners.go:190] proxy-service-h8pxh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0824 23:44:13.334476 7 runners.go:190] proxy-service-h8pxh Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0824 23:44:14.334698 7 runners.go:190] proxy-service-h8pxh Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 24 23:44:14.337: INFO: setup took 7.137713513s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Aug 24 23:44:14.347: INFO: (0) /api/v1/namespaces/proxy-148/pods/http:proxy-service-h8pxh-hgmkc:160/proxy/: foo (200; 9.464512ms) Aug 24 23:44:14.347: INFO: (0) /api/v1/namespaces/proxy-148/pods/proxy-service-h8pxh-hgmkc:160/proxy/: foo (200; 9.445939ms) Aug 24 23:44:14.347: INFO: (0) /api/v1/namespaces/proxy-148/pods/http:proxy-service-h8pxh-hgmkc:162/proxy/: bar (200; 9.880376ms) Aug 24 23:44:14.347: INFO: (0) /api/v1/namespaces/proxy-148/pods/proxy-service-h8pxh-hgmkc:162/proxy/: bar (200; 9.839435ms) Aug 24 23:44:14.347: INFO: (0) /api/v1/namespaces/proxy-148/pods/http:proxy-service-h8pxh-hgmkc:1080/proxy/: t... 
(200; 9.932579ms) [cases (0) through (19), 16 endpoints per case covering the pod, service, HTTP, and HTTPS proxy paths: all 320 requests returned 200]
------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-4f3aaf17-37c2-4c2f-b69f-5dc13b377a86 STEP: Creating configMap with name cm-test-opt-upd-3ff3fb0a-de93-4c4e-acf3-8ca03bb20594 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-4f3aaf17-37c2-4c2f-b69f-5dc13b377a86 STEP: Updating configmap cm-test-opt-upd-3ff3fb0a-de93-4c4e-acf3-8ca03bb20594 STEP: Creating configMap with name cm-test-opt-create-72456141-e349-4360-bf23-4363fa9db3e7 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:45:37.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9443" for this suite.
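The three configmaps above (one deleted mid-test, one updated, one created only after the pod is running) can all back volumes in a running pod because the volume sources are marked optional. An illustrative sketch of that volume shape (volume and configmap names abbreviated from the log):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func optionalConfigMapVolume(volName, cmName string) v1.Volume {
	optional := true
	return v1.Volume{
		Name: volName,
		VolumeSource: v1.VolumeSource{
			ConfigMap: &v1.ConfigMapVolumeSource{
				LocalObjectReference: v1.LocalObjectReference{Name: cmName},
				// Optional: the pod starts (and keeps running) even while the
				// ConfigMap is absent; the kubelet syncs the volume contents
				// once the object appears or changes.
				Optional: &optional,
			},
		},
	}
}

func main() {
	vols := []v1.Volume{
		optionalConfigMapVolume("delcm-volume", "cm-test-opt-del"),
		optionalConfigMapVolume("updcm-volume", "cm-test-opt-upd"),
		optionalConfigMapVolume("createcm-volume", "cm-test-opt-create"),
	}
	fmt.Println(len(vols), "optional configmap volumes")
}
```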
• [SLOW TEST:77.271 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":59,"skipped":902,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:45:37.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 24 23:45:37.654: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:45:43.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6791" for this suite. 
• [SLOW TEST:6.470 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":303,"completed":60,"skipped":907,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:45:43.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Aug 24 23:45:49.349: INFO: Successfully updated pod "annotationupdate14efd24f-b906-41d2-aaf8-b83de04b43d0" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:45:51.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9744" for this suite. 
• [SLOW TEST:7.490 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":61,"skipped":925,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:45:51.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-2ae775ce-19a3-42b2-bcc6-a5aaf4b9ff5d STEP: Creating a pod to test consume secrets Aug 24 23:45:51.525: INFO: Waiting up to 5m0s for pod "pod-secrets-0cee5f01-c534-4e33-921b-b3d8e4e2aa05" in namespace "secrets-5688" to be "Succeeded or Failed" Aug 24 23:45:51.591: INFO: Pod "pod-secrets-0cee5f01-c534-4e33-921b-b3d8e4e2aa05": Phase="Pending", Reason="", readiness=false. Elapsed: 65.664052ms Aug 24 23:45:53.595: INFO: Pod "pod-secrets-0cee5f01-c534-4e33-921b-b3d8e4e2aa05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069231449s Aug 24 23:45:55.813: INFO: Pod "pod-secrets-0cee5f01-c534-4e33-921b-b3d8e4e2aa05": Phase="Pending", Reason="", readiness=false. Elapsed: 4.287731744s Aug 24 23:45:57.878: INFO: Pod "pod-secrets-0cee5f01-c534-4e33-921b-b3d8e4e2aa05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.352353114s STEP: Saw pod success Aug 24 23:45:57.878: INFO: Pod "pod-secrets-0cee5f01-c534-4e33-921b-b3d8e4e2aa05" satisfied condition "Succeeded or Failed" Aug 24 23:45:57.881: INFO: Trying to get logs from node latest-worker pod pod-secrets-0cee5f01-c534-4e33-921b-b3d8e4e2aa05 container secret-volume-test: STEP: delete the pod Aug 24 23:45:58.098: INFO: Waiting for pod pod-secrets-0cee5f01-c534-4e33-921b-b3d8e4e2aa05 to disappear Aug 24 23:45:58.147: INFO: Pod pod-secrets-0cee5f01-c534-4e33-921b-b3d8e4e2aa05 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:45:58.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5688" for this suite. 
• [SLOW TEST:6.764 seconds] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":62,"skipped":937,"failed":0} SSSSSS ------------------------------ [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:45:58.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:45:58.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-4681" for this suite. 
•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":63,"skipped":943,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:45:58.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-0cb2ac64-0ec8-420c-8e72-ad19db2c9ae2 STEP: Creating a pod to test consume configMaps Aug 24 23:45:58.839: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b4a1ef7d-56dd-4fc6-93fb-62c14236335f" in namespace "projected-4143" to be "Succeeded or Failed" Aug 24 23:45:58.856: INFO: Pod "pod-projected-configmaps-b4a1ef7d-56dd-4fc6-93fb-62c14236335f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.080627ms Aug 24 23:46:01.178: INFO: Pod "pod-projected-configmaps-b4a1ef7d-56dd-4fc6-93fb-62c14236335f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.338810103s Aug 24 23:46:03.181: INFO: Pod "pod-projected-configmaps-b4a1ef7d-56dd-4fc6-93fb-62c14236335f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.341508433s Aug 24 23:46:05.229: INFO: Pod "pod-projected-configmaps-b4a1ef7d-56dd-4fc6-93fb-62c14236335f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.389716049s STEP: Saw pod success Aug 24 23:46:05.229: INFO: Pod "pod-projected-configmaps-b4a1ef7d-56dd-4fc6-93fb-62c14236335f" satisfied condition "Succeeded or Failed" Aug 24 23:46:05.232: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-b4a1ef7d-56dd-4fc6-93fb-62c14236335f container projected-configmap-volume-test: STEP: delete the pod Aug 24 23:46:05.255: INFO: Waiting for pod pod-projected-configmaps-b4a1ef7d-56dd-4fc6-93fb-62c14236335f to disappear Aug 24 23:46:05.285: INFO: Pod pod-projected-configmaps-b4a1ef7d-56dd-4fc6-93fb-62c14236335f no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:46:05.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4143" for this suite. 
• [SLOW TEST:6.552 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":64,"skipped":955,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:46:05.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Aug 24 23:46:05.677: INFO: Waiting up to 5m0s for pod "pod-29f19dc5-b245-4bcd-b98a-94b0b4784099" in namespace "emptydir-3891" to be "Succeeded or Failed" Aug 24 23:46:05.875: INFO: Pod "pod-29f19dc5-b245-4bcd-b98a-94b0b4784099": Phase="Pending", Reason="", readiness=false. Elapsed: 198.605533ms Aug 24 23:46:07.880: INFO: Pod "pod-29f19dc5-b245-4bcd-b98a-94b0b4784099": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203204735s Aug 24 23:46:09.885: INFO: Pod "pod-29f19dc5-b245-4bcd-b98a-94b0b4784099": Phase="Running", Reason="", readiness=true. Elapsed: 4.20808071s Aug 24 23:46:11.889: INFO: Pod "pod-29f19dc5-b245-4bcd-b98a-94b0b4784099": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.212173616s STEP: Saw pod success Aug 24 23:46:11.889: INFO: Pod "pod-29f19dc5-b245-4bcd-b98a-94b0b4784099" satisfied condition "Succeeded or Failed" Aug 24 23:46:11.891: INFO: Trying to get logs from node latest-worker pod pod-29f19dc5-b245-4bcd-b98a-94b0b4784099 container test-container: STEP: delete the pod Aug 24 23:46:11.933: INFO: Waiting for pod pod-29f19dc5-b245-4bcd-b98a-94b0b4784099 to disappear Aug 24 23:46:11.949: INFO: Pod pod-29f19dc5-b245-4bcd-b98a-94b0b4784099 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:46:11.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3891" for this suite. 
• [SLOW TEST:6.651 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":65,"skipped":977,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:46:11.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Aug 24 23:46:12.057: INFO: Waiting up to 1m0s for all nodes to be ready Aug 24 23:47:12.082: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:47:12.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:487 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. Aug 24 23:47:16.233: INFO: found a healthy node: latest-worker2 [It] runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 24 23:47:32.475: INFO: pods created so far: [1 1 1] Aug 24 23:47:32.475: INFO: length of pods created so far: 3 Aug 24 23:47:44.483: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:47:51.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-1404" for this suite. 
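
The preemption running path above hinges on pod priority: when the chosen node is full, ReplicaSet pods of a higher priority class evict lower-priority ones. The suite's priority class setup is not shown in this excerpt, so here is a sketch of the moving parts with assumed names and values:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)
	ctx := context.TODO()

	// Two priority levels; pods of the higher class may preempt pods of the
	// lower one when a node has no room left.
	for name, value := range map[string]int32{"demo-low": 100, "demo-high": 1000} {
		_, err := cs.SchedulingV1().PriorityClasses().Create(ctx, &schedulingv1.PriorityClass{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Value:      value,
		}, metav1.CreateOptions{})
		must(err)
	}

	// A pod opts in by naming a class; the scheduler resolves it to a priority.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor"},
		Spec: corev1.PodSpec{
			PriorityClassName: "demo-high",
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2", // the pause image already present on these nodes
			}},
		},
	}
	_, err = cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
	must(err)
}
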
[AfterEach] PreemptionExecutionPath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:461 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:47:51.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-335" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:99.755 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:450 runs ReplicaSets to verify preemption running path [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":303,"completed":66,"skipped":978,"failed":0} [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:47:51.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 24 23:47:58.495: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:47:59.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7933" for this suite. 
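
The termination-message check works because the kubelet copies whatever the process leaves at the container's TerminationMessagePath into its terminated status, which is where the "Expected: &{DONE}" assertion above reads it back. A sketch of a pod like the one in this spec; the path, uid, and names are assumed for illustration:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)
	ctx, ns := context.TODO(), "default"

	uid := int64(1000)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termmsg-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "msg",
				Image: "busybox",
				// Whatever lands at TerminationMessagePath is surfaced by the
				// kubelet in the container's terminated state.
				Command:                []string{"sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
				TerminationMessagePath: "/dev/termination-custom-log",
				SecurityContext:        &corev1.SecurityContext{RunAsUser: &uid},
			}},
		},
	}
	created, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	must(err)

	// Poll until the container terminates, then read the message back.
	for {
		p, err := cs.CoreV1().Pods(ns).Get(ctx, created.Name, metav1.GetOptions{})
		must(err)
		if st := p.Status.ContainerStatuses; len(st) > 0 && st[0].State.Terminated != nil {
			fmt.Println("termination message:", st[0].State.Terminated.Message)
			return
		}
		time.Sleep(time.Second)
	}
}
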
• [SLOW TEST:7.739 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":303,"completed":67,"skipped":978,"failed":0} SS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:47:59.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4234.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4234.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4234.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4234.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4234.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4234.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 24 23:48:10.208: INFO: DNS probes using dns-4234/dns-test-0e8aab68-a616-40bb-8af7-81c0df2b8a52 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:48:10.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4234" for this suite. • [SLOW TEST:10.937 seconds] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":303,"completed":68,"skipped":980,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:48:10.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 24 23:48:10.500: INFO: Waiting up to 5m0s for pod "downwardapi-volume-80e51a83-a9d0-4be3-a5ac-ae728b03770c" in namespace "projected-6421" to be "Succeeded or Failed" Aug 24 23:48:10.503: INFO: Pod "downwardapi-volume-80e51a83-a9d0-4be3-a5ac-ae728b03770c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.340057ms Aug 24 23:48:12.507: INFO: Pod "downwardapi-volume-80e51a83-a9d0-4be3-a5ac-ae728b03770c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007111433s Aug 24 23:48:14.511: INFO: Pod "downwardapi-volume-80e51a83-a9d0-4be3-a5ac-ae728b03770c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.011019465s Aug 24 23:48:16.515: INFO: Pod "downwardapi-volume-80e51a83-a9d0-4be3-a5ac-ae728b03770c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015348254s STEP: Saw pod success Aug 24 23:48:16.515: INFO: Pod "downwardapi-volume-80e51a83-a9d0-4be3-a5ac-ae728b03770c" satisfied condition "Succeeded or Failed" Aug 24 23:48:16.518: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-80e51a83-a9d0-4be3-a5ac-ae728b03770c container client-container: STEP: delete the pod Aug 24 23:48:16.584: INFO: Waiting for pod downwardapi-volume-80e51a83-a9d0-4be3-a5ac-ae728b03770c to disappear Aug 24 23:48:16.596: INFO: Pod downwardapi-volume-80e51a83-a9d0-4be3-a5ac-ae728b03770c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:48:16.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6421" for this suite. • [SLOW TEST:6.213 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":69,"skipped":1007,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:48:16.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:48:21.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9398" for this suite. 
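
Adoption in the spec above succeeds because the pre-existing pod carries a matching 'name' label and no controller ownerReference, so the ReplicationController takes ownership of it instead of creating a second replica. Roughly, with illustrative names and the pause image seen elsewhere in this log:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)
	ctx, ns := context.TODO(), "default"
	labels := map[string]string{"name": "pod-adoption-demo"}

	// An orphan pod that already carries the label...
	orphan := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-demo", Labels: labels},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "k8s.gcr.io/pause:3.2"}},
		},
	}
	_, err = cs.CoreV1().Pods(ns).Create(ctx, orphan, metav1.CreateOptions{})
	must(err)

	// ...is adopted, not replaced, by an RC whose selector matches it: the RC
	// controller sets itself as the pod's controller ownerReference.
	one := int32(1)
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "adopter"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &one,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "app", Image: "k8s.gcr.io/pause:3.2"}},
				},
			},
		},
	}
	_, err = cs.CoreV1().ReplicationControllers(ns).Create(ctx, rc, metav1.CreateOptions{})
	must(err)
}
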
• [SLOW TEST:5.377 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":303,"completed":70,"skipped":1039,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:48:21.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-5mv4 STEP: Creating a pod to test atomic-volume-subpath Aug 24 23:48:22.167: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-5mv4" in namespace "subpath-8060" to be "Succeeded or Failed" Aug 24 23:48:22.305: INFO: Pod "pod-subpath-test-configmap-5mv4": Phase="Pending", Reason="", readiness=false. Elapsed: 138.18736ms Aug 24 23:48:24.373: INFO: Pod "pod-subpath-test-configmap-5mv4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20607675s Aug 24 23:48:26.378: INFO: Pod "pod-subpath-test-configmap-5mv4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.2103173s Aug 24 23:48:28.497: INFO: Pod "pod-subpath-test-configmap-5mv4": Phase="Running", Reason="", readiness=true. Elapsed: 6.329648917s Aug 24 23:48:30.502: INFO: Pod "pod-subpath-test-configmap-5mv4": Phase="Running", Reason="", readiness=true. Elapsed: 8.33441585s Aug 24 23:48:32.506: INFO: Pod "pod-subpath-test-configmap-5mv4": Phase="Running", Reason="", readiness=true. Elapsed: 10.33898528s Aug 24 23:48:34.511: INFO: Pod "pod-subpath-test-configmap-5mv4": Phase="Running", Reason="", readiness=true. Elapsed: 12.343356212s Aug 24 23:48:36.515: INFO: Pod "pod-subpath-test-configmap-5mv4": Phase="Running", Reason="", readiness=true. Elapsed: 14.34776126s Aug 24 23:48:38.522: INFO: Pod "pod-subpath-test-configmap-5mv4": Phase="Running", Reason="", readiness=true. Elapsed: 16.354473147s Aug 24 23:48:40.526: INFO: Pod "pod-subpath-test-configmap-5mv4": Phase="Running", Reason="", readiness=true. Elapsed: 18.359030572s Aug 24 23:48:42.531: INFO: Pod "pod-subpath-test-configmap-5mv4": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.363360091s Aug 24 23:48:44.534: INFO: Pod "pod-subpath-test-configmap-5mv4": Phase="Running", Reason="", readiness=true. Elapsed: 22.366684358s Aug 24 23:48:46.556: INFO: Pod "pod-subpath-test-configmap-5mv4": Phase="Running", Reason="", readiness=true. Elapsed: 24.389125643s Aug 24 23:48:48.562: INFO: Pod "pod-subpath-test-configmap-5mv4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.394831475s STEP: Saw pod success Aug 24 23:48:48.562: INFO: Pod "pod-subpath-test-configmap-5mv4" satisfied condition "Succeeded or Failed" Aug 24 23:48:48.565: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-5mv4 container test-container-subpath-configmap-5mv4: STEP: delete the pod Aug 24 23:48:48.620: INFO: Waiting for pod pod-subpath-test-configmap-5mv4 to disappear Aug 24 23:48:48.674: INFO: Pod pod-subpath-test-configmap-5mv4 no longer exists STEP: Deleting pod pod-subpath-test-configmap-5mv4 Aug 24 23:48:48.674: INFO: Deleting pod "pod-subpath-test-configmap-5mv4" in namespace "subpath-8060" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:48:48.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8060" for this suite. • [SLOW TEST:26.702 seconds] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":303,"completed":71,"skipped":1118,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:48:48.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all Aug 24 23:48:49.066: INFO: Waiting up to 5m0s for pod "client-containers-c47fcaa9-3489-4ad3-bd1a-8540bf2774eb" in namespace "containers-3937" to be "Succeeded or Failed" Aug 24 23:48:49.167: INFO: Pod "client-containers-c47fcaa9-3489-4ad3-bd1a-8540bf2774eb": Phase="Pending", 
Reason="", readiness=false. Elapsed: 100.770979ms Aug 24 23:48:51.274: INFO: Pod "client-containers-c47fcaa9-3489-4ad3-bd1a-8540bf2774eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207948351s Aug 24 23:48:53.278: INFO: Pod "client-containers-c47fcaa9-3489-4ad3-bd1a-8540bf2774eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.211645179s Aug 24 23:48:55.282: INFO: Pod "client-containers-c47fcaa9-3489-4ad3-bd1a-8540bf2774eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.215647422s STEP: Saw pod success Aug 24 23:48:55.282: INFO: Pod "client-containers-c47fcaa9-3489-4ad3-bd1a-8540bf2774eb" satisfied condition "Succeeded or Failed" Aug 24 23:48:55.284: INFO: Trying to get logs from node latest-worker pod client-containers-c47fcaa9-3489-4ad3-bd1a-8540bf2774eb container test-container: STEP: delete the pod Aug 24 23:48:55.552: INFO: Waiting for pod client-containers-c47fcaa9-3489-4ad3-bd1a-8540bf2774eb to disappear Aug 24 23:48:55.680: INFO: Pod client-containers-c47fcaa9-3489-4ad3-bd1a-8540bf2774eb no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:48:55.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3937" for this suite. • [SLOW TEST:7.002 seconds] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":303,"completed":72,"skipped":1127,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:48:55.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 24 23:48:56.618: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-1bf6324f-18f7-490a-8b4e-8fb92831aad4" in namespace "security-context-test-8120" to be 
"Succeeded or Failed" Aug 24 23:48:57.164: INFO: Pod "busybox-readonly-false-1bf6324f-18f7-490a-8b4e-8fb92831aad4": Phase="Pending", Reason="", readiness=false. Elapsed: 545.816819ms Aug 24 23:48:59.167: INFO: Pod "busybox-readonly-false-1bf6324f-18f7-490a-8b4e-8fb92831aad4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.549458545s Aug 24 23:49:01.252: INFO: Pod "busybox-readonly-false-1bf6324f-18f7-490a-8b4e-8fb92831aad4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.634058551s Aug 24 23:49:03.255: INFO: Pod "busybox-readonly-false-1bf6324f-18f7-490a-8b4e-8fb92831aad4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.637466962s Aug 24 23:49:03.255: INFO: Pod "busybox-readonly-false-1bf6324f-18f7-490a-8b4e-8fb92831aad4" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:49:03.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8120" for this suite. • [SLOW TEST:7.576 seconds] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with readOnlyRootFilesystem /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":303,"completed":73,"skipped":1180,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:49:03.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 24 23:49:03.423: INFO: Waiting up to 5m0s for pod "downwardapi-volume-966fc9d0-57f0-46dd-a7a0-e17879e76bed" in namespace "projected-5501" to be "Succeeded or Failed" Aug 24 23:49:03.427: INFO: Pod 
"downwardapi-volume-966fc9d0-57f0-46dd-a7a0-e17879e76bed": Phase="Pending", Reason="", readiness=false. Elapsed: 3.41782ms Aug 24 23:49:05.431: INFO: Pod "downwardapi-volume-966fc9d0-57f0-46dd-a7a0-e17879e76bed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007560457s Aug 24 23:49:07.467: INFO: Pod "downwardapi-volume-966fc9d0-57f0-46dd-a7a0-e17879e76bed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043747665s Aug 24 23:49:09.539: INFO: Pod "downwardapi-volume-966fc9d0-57f0-46dd-a7a0-e17879e76bed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.11552608s STEP: Saw pod success Aug 24 23:49:09.539: INFO: Pod "downwardapi-volume-966fc9d0-57f0-46dd-a7a0-e17879e76bed" satisfied condition "Succeeded or Failed" Aug 24 23:49:09.541: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-966fc9d0-57f0-46dd-a7a0-e17879e76bed container client-container: STEP: delete the pod Aug 24 23:49:09.604: INFO: Waiting for pod downwardapi-volume-966fc9d0-57f0-46dd-a7a0-e17879e76bed to disappear Aug 24 23:49:09.688: INFO: Pod downwardapi-volume-966fc9d0-57f0-46dd-a7a0-e17879e76bed no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:49:09.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5501" for this suite. • [SLOW TEST:6.432 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":74,"skipped":1184,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:49:09.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3628 [It] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: 
Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-3628 STEP: Creating statefulset with conflicting port in namespace statefulset-3628 STEP: Waiting until pod test-pod will start running in namespace statefulset-3628 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3628 Aug 24 23:49:16.002: INFO: Observed stateful pod in namespace: statefulset-3628, name: ss-0, uid: 5fb4aa5b-6509-4518-a08a-4ba8634e2a15, status phase: Pending. Waiting for statefulset controller to delete. Aug 24 23:49:16.494: INFO: Observed stateful pod in namespace: statefulset-3628, name: ss-0, uid: 5fb4aa5b-6509-4518-a08a-4ba8634e2a15, status phase: Failed. Waiting for statefulset controller to delete. Aug 24 23:49:16.509: INFO: Observed stateful pod in namespace: statefulset-3628, name: ss-0, uid: 5fb4aa5b-6509-4518-a08a-4ba8634e2a15, status phase: Failed. Waiting for statefulset controller to delete. Aug 24 23:49:16.531: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3628 STEP: Removing pod with conflicting port in namespace statefulset-3628 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3628 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 24 23:49:22.891: INFO: Deleting all statefulset in ns statefulset-3628 Aug 24 23:49:22.894: INFO: Scaling statefulset ss to 0 Aug 24 23:49:32.918: INFO: Waiting for statefulset status.replicas updated to 0 Aug 24 23:49:32.921: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:49:32.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3628" for this suite. 
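
The eviction here is manufactured with a host-port collision: the StatefulSet's pod asks for a host port already held by test-pod on the chosen node, lands in Failed, and is recreated by the controller once the conflicting pod is removed. A sketch of such a StatefulSet, reusing the spec's "test" service name; the port number and labels are assumed:

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	one := int32(1)
	labels := map[string]string{"app": "ss-demo"}
	ss := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &one,
			ServiceName: "test", // headless service created separately, as in the spec above
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "web",
						Image: "k8s.gcr.io/pause:3.2",
						// The host port is the conflict vector: only one pod per
						// node can hold it at a time.
						Ports: []corev1.ContainerPort{{ContainerPort: 80, HostPort: 21017}},
					}},
				},
			},
		},
	}
	_, err = cs.AppsV1().StatefulSets("default").Create(context.TODO(), ss, metav1.CreateOptions{})
	must(err)
}
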
• [SLOW TEST:23.287 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":303,"completed":75,"skipped":1196,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:49:32.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Aug 24 23:49:33.127: INFO: Waiting up to 5m0s for pod "pod-b38ab5e6-2d82-4931-9150-bcb06bc39ae2" in namespace "emptydir-7100" to be "Succeeded or Failed" Aug 24 23:49:33.156: INFO: Pod "pod-b38ab5e6-2d82-4931-9150-bcb06bc39ae2": Phase="Pending", Reason="", readiness=false. Elapsed: 28.645015ms Aug 24 23:49:35.233: INFO: Pod "pod-b38ab5e6-2d82-4931-9150-bcb06bc39ae2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106146566s Aug 24 23:49:37.237: INFO: Pod "pod-b38ab5e6-2d82-4931-9150-bcb06bc39ae2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110041997s Aug 24 23:49:39.300: INFO: Pod "pod-b38ab5e6-2d82-4931-9150-bcb06bc39ae2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.172348924s STEP: Saw pod success Aug 24 23:49:39.300: INFO: Pod "pod-b38ab5e6-2d82-4931-9150-bcb06bc39ae2" satisfied condition "Succeeded or Failed" Aug 24 23:49:39.335: INFO: Trying to get logs from node latest-worker pod pod-b38ab5e6-2d82-4931-9150-bcb06bc39ae2 container test-container: STEP: delete the pod Aug 24 23:49:39.473: INFO: Waiting for pod pod-b38ab5e6-2d82-4931-9150-bcb06bc39ae2 to disappear Aug 24 23:49:39.485: INFO: Pod pod-b38ab5e6-2d82-4931-9150-bcb06bc39ae2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:49:39.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7100" for this suite. 
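
The (root,0777,default) case differs from the (non-root,0666,tmpfs) sketch earlier only in those knobs: an empty Medium selects the node's default backing storage rather than tmpfs, and with no SecurityContext the container runs as the image's default user, which is root for busybox. The same sketch with just those lines changed:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-default-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// No pod SecurityContext: the container runs as the image default (root).
			Volumes: []corev1.Volume{{
				Name: "scratch",
				// An empty EmptyDirVolumeSource means the node's default medium.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:         "writer",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "touch /mnt/f && chmod 0777 /mnt/f && stat -c %a /mnt/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt"}},
			}},
		},
	}
	_, err = cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	must(err)
}
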
• [SLOW TEST:6.533 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":76,"skipped":1209,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:49:39.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 24 23:49:40.629: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 24 23:49:42.638: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733909780, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733909780, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733909780, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733909780, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 24 23:49:44.653: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733909780, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733909780, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733909780, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733909780, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 24 23:49:47.776: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Aug 24 23:49:52.061: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config attach --namespace=webhook-6372 to-be-attached-pod -i -c=container1' Aug 24 23:49:57.359: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:49:57.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6372" for this suite. STEP: Destroying namespace "webhook-6372-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.032 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":303,"completed":77,"skipped":1215,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:49:57.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 24 23:49:57.595: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 24 23:49:57.659: INFO: Waiting for terminating namespaces to be deleted... 
Aug 24 23:49:57.663: INFO: Logging pods the apiserver thinks is on node latest-worker before test Aug 24 23:49:57.667: INFO: daemon-set-64t9w from daemonsets-1323 started at 2020-08-21 01:17:50 +0000 UTC (1 container statuses recorded) Aug 24 23:49:57.667: INFO: Container app ready: true, restart count 0 Aug 24 23:49:57.667: INFO: kindnet-gmpqb from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Aug 24 23:49:57.667: INFO: Container kindnet-cni ready: true, restart count 1 Aug 24 23:49:57.667: INFO: kube-proxy-82wrf from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Aug 24 23:49:57.667: INFO: Container kube-proxy ready: true, restart count 0 Aug 24 23:49:57.667: INFO: to-be-attached-pod from webhook-6372 started at 2020-08-24 23:49:47 +0000 UTC (1 container statuses recorded) Aug 24 23:49:57.667: INFO: Container container1 ready: true, restart count 0 Aug 24 23:49:57.667: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Aug 24 23:49:57.672: INFO: daemon-set-jxhg7 from daemonsets-1323 started at 2020-08-21 01:17:50 +0000 UTC (1 container statuses recorded) Aug 24 23:49:57.672: INFO: Container app ready: true, restart count 0 Aug 24 23:49:57.672: INFO: kindnet-grzzh from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Aug 24 23:49:57.672: INFO: Container kindnet-cni ready: true, restart count 1 Aug 24 23:49:57.672: INFO: kube-proxy-fjk8r from kube-system started at 2020-08-15 09:42:29 +0000 UTC (1 container statuses recorded) Aug 24 23:49:57.672: INFO: Container kube-proxy ready: true, restart count 0 Aug 24 23:49:57.672: INFO: sample-webhook-deployment-cbccbf6bb-sl8md from webhook-6372 started at 2020-08-24 23:49:40 +0000 UTC (1 container statuses recorded) Aug 24 23:49:57.672: INFO: Container sample-webhook ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Aug 24 23:49:57.792: INFO: Pod daemon-set-64t9w requesting resource cpu=0m on Node latest-worker Aug 24 23:49:57.793: INFO: Pod daemon-set-jxhg7 requesting resource cpu=0m on Node latest-worker2 Aug 24 23:49:57.793: INFO: Pod kindnet-gmpqb requesting resource cpu=100m on Node latest-worker Aug 24 23:49:57.793: INFO: Pod kindnet-grzzh requesting resource cpu=100m on Node latest-worker2 Aug 24 23:49:57.793: INFO: Pod kube-proxy-82wrf requesting resource cpu=0m on Node latest-worker Aug 24 23:49:57.793: INFO: Pod kube-proxy-fjk8r requesting resource cpu=0m on Node latest-worker2 Aug 24 23:49:57.793: INFO: Pod sample-webhook-deployment-cbccbf6bb-sl8md requesting resource cpu=0m on Node latest-worker2 Aug 24 23:49:57.793: INFO: Pod to-be-attached-pod requesting resource cpu=0m on Node latest-worker STEP: Starting Pods to consume most of the cluster CPU. Aug 24 23:49:57.793: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker Aug 24 23:49:57.863: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-dda706b9-845f-4d3b-88bb-c969a00f0a39.162e59341d4b97ac], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1088/filler-pod-dda706b9-845f-4d3b-88bb-c969a00f0a39 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-e47bf930-0f99-4271-a390-6190c48ea0d0.162e593411d7e49a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1088/filler-pod-e47bf930-0f99-4271-a390-6190c48ea0d0 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-e47bf930-0f99-4271-a390-6190c48ea0d0.162e5934d35a82a2], Reason = [Created], Message = [Created container filler-pod-e47bf930-0f99-4271-a390-6190c48ea0d0] STEP: Considering event: Type = [Normal], Name = [filler-pod-dda706b9-845f-4d3b-88bb-c969a00f0a39.162e59348e09c112], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-e47bf930-0f99-4271-a390-6190c48ea0d0.162e59346faf0a09], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-e47bf930-0f99-4271-a390-6190c48ea0d0.162e5934fde7f909], Reason = [Started], Message = [Started container filler-pod-e47bf930-0f99-4271-a390-6190c48ea0d0] STEP: Considering event: Type = [Normal], Name = [filler-pod-dda706b9-845f-4d3b-88bb-c969a00f0a39.162e593512ac8c86], Reason = [Started], Message = [Started container filler-pod-dda706b9-845f-4d3b-88bb-c969a00f0a39] STEP: Considering event: Type = [Normal], Name = [filler-pod-dda706b9-845f-4d3b-88bb-c969a00f0a39.162e593504bee25e], Reason = [Created], Message = [Created container filler-pod-dda706b9-845f-4d3b-88bb-c969a00f0a39] STEP: Considering event: Type = [Warning], Name = [additional-pod.162e593584d460ab], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.162e593588fdabb1], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:50:05.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1088" for this suite. 
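
The FailedScheduling events above come from a pod whose CPU request cannot fit on either schedulable node once the filler pods have claimed the rest; the fit check is driven by requests, not limits. A pod that over-requests looks roughly like this, with an illustrative quantity:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "additional-pod-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
				Resources: corev1.ResourceRequirements{
					// If no node has this much CPU left unrequested, the pod stays
					// Pending with a FailedScheduling event like the ones above.
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("600m"),
					},
				},
			}},
		},
	}
	_, err = cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	must(err)
}
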
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.750 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":303,"completed":78,"skipped":1234,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:50:05.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 24 23:50:06.318: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 24 23:50:08.330: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733909806, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733909806, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733909806, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733909806, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 24 23:50:10.463: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733909806, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63733909806, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733909806, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733909806, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 24 23:50:12.540: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733909806, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733909806, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733909806, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733909806, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 24 23:50:15.373: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:50:16.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3391" for this suite. STEP: Destroying namespace "webhook-3391-markers" for this suite. 
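
The listing spec creates its ValidatingWebhookConfigurations, confirms a non-compliant ConfigMap is rejected, deletes the configurations as a collection, and confirms the same ConfigMap is then admitted. The list and delete-collection calls involved look roughly like this; the label selector is assumed, not taken from the test source:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)
	ctx := context.TODO()
	selector := metav1.ListOptions{LabelSelector: "e2e-list-test=true"}

	// List every configuration carrying the label.
	list, err := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().List(ctx, selector)
	must(err)
	fmt.Println("validating webhook configurations:", len(list.Items))

	// DeleteCollection removes all of them in one call; afterwards the
	// previously rejected ConfigMap is admitted again.
	err = cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().
		DeleteCollection(ctx, metav1.DeleteOptions{}, selector)
	must(err)
}
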
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:10.934 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
listing validating webhooks should work [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":303,"completed":79,"skipped":1240,"failed":0}
SSSSS
------------------------------
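The spec above drives the admissionregistration.k8s.io API: it creates labelled ValidatingWebhookConfigurations, lists them, and then deletes them as a collection, after which the previously rejected configMap can be created. A rough client-go sketch of those two verbs; the package name, label selector, and error handling are illustrative assumptions, not the e2e framework's actual code:

package webhookexample

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listAndDeleteWebhooks mirrors the two STEPs above: "Listing all of the
// created validation webhooks" and "Deleting the collection of validation
// webhooks", both scoped by a label selector.
func listAndDeleteWebhooks(ctx context.Context, client kubernetes.Interface, selector string) error {
	admission := client.AdmissionregistrationV1().ValidatingWebhookConfigurations()

	list, err := admission.List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return err
	}
	fmt.Printf("found %d validating webhook configurations\n", len(list.Items))

	// Delete everything matching the selector in one call.
	return admission.DeleteCollection(ctx, metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: selector})
}

------------------------------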
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 24 23:50:16.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Aug 24 23:50:16.447: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-dc905ce7-fcf1-424f-986e-ebaf8260ac40" in namespace "security-context-test-6340" to be "Succeeded or Failed"
Aug 24 23:50:16.457: INFO: Pod "alpine-nnp-false-dc905ce7-fcf1-424f-986e-ebaf8260ac40": Phase="Pending", Reason="", readiness=false. Elapsed: 10.351207ms
Aug 24 23:50:18.521: INFO: Pod "alpine-nnp-false-dc905ce7-fcf1-424f-986e-ebaf8260ac40": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074379625s
Aug 24 23:50:20.525: INFO: Pod "alpine-nnp-false-dc905ce7-fcf1-424f-986e-ebaf8260ac40": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07792763s
Aug 24 23:50:22.630: INFO: Pod "alpine-nnp-false-dc905ce7-fcf1-424f-986e-ebaf8260ac40": Phase="Running", Reason="", readiness=true. Elapsed: 6.183775698s
Aug 24 23:50:24.888: INFO: Pod "alpine-nnp-false-dc905ce7-fcf1-424f-986e-ebaf8260ac40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.441035544s
Aug 24 23:50:24.888: INFO: Pod "alpine-nnp-false-dc905ce7-fcf1-424f-986e-ebaf8260ac40" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 24 23:50:24.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6340" for this suite.
• [SLOW TEST:8.859 seconds]
[k8s.io] Security Context
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
when creating containers with AllowPrivilegeEscalation
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291
should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":80,"skipped":1245,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
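The "alpine-nnp-false-..." pod above gets its behavior from the container-level securityContext: with allowPrivilegeEscalation set to false the container process runs with no_new_privs, so it cannot gain privileges via setuid binaries or file capabilities. A minimal sketch of such a pod in Go; the package name and image are illustrative assumptions, not the suite's actual test image:

package securitycontextexample

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// noEscalationPod builds a pod whose container may not gain more privileges
// than its parent process, which is what the spec above verifies.
func noEscalationPod(namespace string) *corev1.Pod {
	allowEscalation := false
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "alpine-nnp-false", Namespace: namespace},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "alpine-nnp-false",
				Image: "alpine:3.12", // illustrative; the e2e suite uses its own test image
				SecurityContext: &corev1.SecurityContext{
					AllowPrivilegeEscalation: &allowEscalation,
				},
			}},
		},
	}
}

------------------------------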
[sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 24 23:50:25.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 24 23:50:55.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7123" for this suite.
• [SLOW TEST:30.565 seconds]
[sig-apps] Job
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":303,"completed":81,"skipped":1281,"failed":0}
SSSSSSSSS
------------------------------
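"Locally restarted" in this spec means the kubelet restarts the failing container inside the same pod rather than the Job controller replacing the pod; that is selected by restartPolicy: OnFailure on the Job's pod template, and any state that must survive the restart has to live on a volume such as an emptyDir. A hedged sketch, where the fail-once command, names, and image are assumptions and not the conformance test's actual manifest:

package jobexample

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// locallyRestartedJob builds a Job whose container fails on its first run and
// succeeds after being restarted in place; the emptyDir volume is the only
// state that survives the container restart.
func locallyRestartedJob(namespace string) *batchv1.Job {
	completions := int32(1)
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "sometimes-fail", Namespace: namespace},
		Spec: batchv1.JobSpec{
			Completions: &completions,
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// OnFailure: the failed container is restarted in the
					// same pod, which is the "locally restarted" case.
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Volumes: []corev1.Volume{{
						Name:         "data",
						VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
					}},
					Containers: []corev1.Container{{
						Name:  "c",
						Image: "busybox:1.32",
						Command: []string{"sh", "-c",
							"if [ ! -f /data/ran ]; then touch /data/ran; exit 1; fi; exit 0"},
						VolumeMounts: []corev1.VolumeMount{{Name: "data", MountPath: "/data"}},
					}},
				},
			},
		},
	}
}

------------------------------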
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":303,"completed":82,"skipped":1290,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:50:55.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Aug 24 23:51:02.628: INFO: Successfully updated pod "labelsupdate5a3cefba-1e25-4daa-aab6-ed5bdfae1d07" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:51:04.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7930" for this suite. 
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 24 23:50:55.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
Aug 24 23:51:02.628: INFO: Successfully updated pod "labelsupdate5a3cefba-1e25-4daa-aab6-ed5bdfae1d07"
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 24 23:51:04.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7930" for this suite.
• [SLOW TEST:9.069 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
should update labels on modification [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":83,"skipped":1350,"failed":0}
SSS
------------------------------
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 24 23:51:05.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Aug 24 23:51:05.112: INFO: Waiting up to 5m0s for pod "downward-api-30220d39-35c2-4d15-a18d-ec1358c202c8" in namespace "downward-api-9765" to be "Succeeded or Failed"
Aug 24 23:51:05.127: INFO: Pod "downward-api-30220d39-35c2-4d15-a18d-ec1358c202c8": Phase="Pending", Reason="", readiness=false. Elapsed: 14.148029ms
Aug 24 23:51:07.130: INFO: Pod "downward-api-30220d39-35c2-4d15-a18d-ec1358c202c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017743386s
Aug 24 23:51:09.294: INFO: Pod "downward-api-30220d39-35c2-4d15-a18d-ec1358c202c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.182060615s
STEP: Saw pod success
Aug 24 23:51:09.295: INFO: Pod "downward-api-30220d39-35c2-4d15-a18d-ec1358c202c8" satisfied condition "Succeeded or Failed"
Aug 24 23:51:09.298: INFO: Trying to get logs from node latest-worker pod downward-api-30220d39-35c2-4d15-a18d-ec1358c202c8 container dapi-container:
STEP: delete the pod
Aug 24 23:51:09.353: INFO: Waiting for pod downward-api-30220d39-35c2-4d15-a18d-ec1358c202c8 to disappear
Aug 24 23:51:09.773: INFO: Pod downward-api-30220d39-35c2-4d15-a18d-ec1358c202c8 no longer exists
[AfterEach] [sig-node] Downward API
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 24 23:51:09.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9765" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":303,"completed":84,"skipped":1353,"failed":0} SS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:51:09.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-8632 STEP: creating replication controller nodeport-test in namespace services-8632 I0824 23:51:10.662693 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-8632, replica count: 2 I0824 23:51:13.712925 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0824 23:51:16.713293 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0824 23:51:19.713539 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 24 23:51:19.713: INFO: Creating new exec pod Aug 24 23:51:28.878: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-8632 execpodffg6h -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Aug 24 23:51:29.109: INFO: stderr: "I0824 23:51:29.028432 379 log.go:181] (0xc000125600) (0xc000b9d220) Create stream\nI0824 23:51:29.028496 379 log.go:181] (0xc000125600) (0xc000b9d220) Stream added, broadcasting: 1\nI0824 23:51:29.031201 379 log.go:181] (0xc000125600) Reply frame received for 1\nI0824 23:51:29.031241 379 log.go:181] (0xc000125600) (0xc000bc4500) Create stream\nI0824 23:51:29.031255 379 log.go:181] (0xc000125600) (0xc000bc4500) Stream added, broadcasting: 3\nI0824 23:51:29.032247 379 log.go:181] (0xc000125600) Reply frame received for 3\nI0824 23:51:29.032293 379 log.go:181] (0xc000125600) (0xc000742000) Create stream\nI0824 23:51:29.032314 379 log.go:181] (0xc000125600) (0xc000742000) Stream added, broadcasting: 5\nI0824 23:51:29.033330 379 log.go:181] (0xc000125600) Reply frame received for 5\nI0824 23:51:29.096924 379 log.go:181] (0xc000125600) Data frame received for 3\nI0824 23:51:29.096950 379 log.go:181] (0xc000bc4500) (3) Data frame handling\nI0824 23:51:29.096979 379 log.go:181] (0xc000125600) Data frame received for 5\nI0824 23:51:29.096990 379 log.go:181] (0xc000742000) (5) Data frame handling\nI0824 23:51:29.097046 379 log.go:181] 
(0xc000742000) (5) Data frame sent\nI0824 23:51:29.097064 379 log.go:181] (0xc000125600) Data frame received for 5\nI0824 23:51:29.097072 379 log.go:181] (0xc000742000) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0824 23:51:29.099010 379 log.go:181] (0xc000125600) Data frame received for 1\nI0824 23:51:29.099058 379 log.go:181] (0xc000b9d220) (1) Data frame handling\nI0824 23:51:29.099093 379 log.go:181] (0xc000b9d220) (1) Data frame sent\nI0824 23:51:29.099122 379 log.go:181] (0xc000125600) (0xc000b9d220) Stream removed, broadcasting: 1\nI0824 23:51:29.099159 379 log.go:181] (0xc000125600) Go away received\nI0824 23:51:29.099629 379 log.go:181] (0xc000125600) (0xc000b9d220) Stream removed, broadcasting: 1\nI0824 23:51:29.099652 379 log.go:181] (0xc000125600) (0xc000bc4500) Stream removed, broadcasting: 3\nI0824 23:51:29.099662 379 log.go:181] (0xc000125600) (0xc000742000) Stream removed, broadcasting: 5\n" Aug 24 23:51:29.109: INFO: stdout: "" Aug 24 23:51:29.110: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-8632 execpodffg6h -- /bin/sh -x -c nc -zv -t -w 2 10.100.95.1 80' Aug 24 23:51:29.339: INFO: stderr: "I0824 23:51:29.248055 397 log.go:181] (0xc000f18000) (0xc0009aa000) Create stream\nI0824 23:51:29.248122 397 log.go:181] (0xc000f18000) (0xc0009aa000) Stream added, broadcasting: 1\nI0824 23:51:29.251002 397 log.go:181] (0xc000f18000) Reply frame received for 1\nI0824 23:51:29.251034 397 log.go:181] (0xc000f18000) (0xc00019ca00) Create stream\nI0824 23:51:29.251046 397 log.go:181] (0xc000f18000) (0xc00019ca00) Stream added, broadcasting: 3\nI0824 23:51:29.251839 397 log.go:181] (0xc000f18000) Reply frame received for 3\nI0824 23:51:29.251861 397 log.go:181] (0xc000f18000) (0xc000209860) Create stream\nI0824 23:51:29.251868 397 log.go:181] (0xc000f18000) (0xc000209860) Stream added, broadcasting: 5\nI0824 23:51:29.252659 397 log.go:181] (0xc000f18000) Reply frame received for 5\nI0824 23:51:29.331659 397 log.go:181] (0xc000f18000) Data frame received for 5\nI0824 23:51:29.331687 397 log.go:181] (0xc000209860) (5) Data frame handling\nI0824 23:51:29.331694 397 log.go:181] (0xc000209860) (5) Data frame sent\nI0824 23:51:29.331699 397 log.go:181] (0xc000f18000) Data frame received for 5\nI0824 23:51:29.331708 397 log.go:181] (0xc000209860) (5) Data frame handling\n+ nc -zv -t -w 2 10.100.95.1 80\nConnection to 10.100.95.1 80 port [tcp/http] succeeded!\nI0824 23:51:29.331719 397 log.go:181] (0xc000f18000) Data frame received for 3\nI0824 23:51:29.331724 397 log.go:181] (0xc00019ca00) (3) Data frame handling\nI0824 23:51:29.332977 397 log.go:181] (0xc000f18000) Data frame received for 1\nI0824 23:51:29.332994 397 log.go:181] (0xc0009aa000) (1) Data frame handling\nI0824 23:51:29.333005 397 log.go:181] (0xc0009aa000) (1) Data frame sent\nI0824 23:51:29.333017 397 log.go:181] (0xc000f18000) (0xc0009aa000) Stream removed, broadcasting: 1\nI0824 23:51:29.333032 397 log.go:181] (0xc000f18000) Go away received\nI0824 23:51:29.333340 397 log.go:181] (0xc000f18000) (0xc0009aa000) Stream removed, broadcasting: 1\nI0824 23:51:29.333356 397 log.go:181] (0xc000f18000) (0xc00019ca00) Stream removed, broadcasting: 3\nI0824 23:51:29.333365 397 log.go:181] (0xc000f18000) (0xc000209860) Stream removed, broadcasting: 5\n" Aug 24 23:51:29.339: INFO: stdout: "" Aug 24 23:51:29.339: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-8632 execpodffg6h -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 31870' Aug 24 23:51:29.555: INFO: stderr: "I0824 23:51:29.464947 415 log.go:181] (0xc00096b340) (0xc000be2960) Create stream\nI0824 23:51:29.464992 415 log.go:181] (0xc00096b340) (0xc000be2960) Stream added, broadcasting: 1\nI0824 23:51:29.470915 415 log.go:181] (0xc00096b340) Reply frame received for 1\nI0824 23:51:29.470957 415 log.go:181] (0xc00096b340) (0xc000be2000) Create stream\nI0824 23:51:29.470970 415 log.go:181] (0xc00096b340) (0xc000be2000) Stream added, broadcasting: 3\nI0824 23:51:29.471814 415 log.go:181] (0xc00096b340) Reply frame received for 3\nI0824 23:51:29.471839 415 log.go:181] (0xc00096b340) (0xc000906460) Create stream\nI0824 23:51:29.471847 415 log.go:181] (0xc00096b340) (0xc000906460) Stream added, broadcasting: 5\nI0824 23:51:29.472835 415 log.go:181] (0xc00096b340) Reply frame received for 5\nI0824 23:51:29.546872 415 log.go:181] (0xc00096b340) Data frame received for 3\nI0824 23:51:29.546895 415 log.go:181] (0xc000be2000) (3) Data frame handling\nI0824 23:51:29.546910 415 log.go:181] (0xc00096b340) Data frame received for 5\nI0824 23:51:29.546914 415 log.go:181] (0xc000906460) (5) Data frame handling\nI0824 23:51:29.546920 415 log.go:181] (0xc000906460) (5) Data frame sent\nI0824 23:51:29.546925 415 log.go:181] (0xc00096b340) Data frame received for 5\nI0824 23:51:29.546929 415 log.go:181] (0xc000906460) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.11 31870\nConnection to 172.18.0.11 31870 port [tcp/31870] succeeded!\nI0824 23:51:29.547894 415 log.go:181] (0xc00096b340) Data frame received for 1\nI0824 23:51:29.547913 415 log.go:181] (0xc000be2960) (1) Data frame handling\nI0824 23:51:29.547923 415 log.go:181] (0xc000be2960) (1) Data frame sent\nI0824 23:51:29.547936 415 log.go:181] (0xc00096b340) (0xc000be2960) Stream removed, broadcasting: 1\nI0824 23:51:29.547949 415 log.go:181] (0xc00096b340) Go away received\nI0824 23:51:29.548250 415 log.go:181] (0xc00096b340) (0xc000be2960) Stream removed, broadcasting: 1\nI0824 23:51:29.548260 415 log.go:181] (0xc00096b340) (0xc000be2000) Stream removed, broadcasting: 3\nI0824 23:51:29.548265 415 log.go:181] (0xc00096b340) (0xc000906460) Stream removed, broadcasting: 5\n" Aug 24 23:51:29.555: INFO: stdout: "" Aug 24 23:51:29.555: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-8632 execpodffg6h -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 31870' Aug 24 23:51:29.853: INFO: stderr: "I0824 23:51:29.791762 433 log.go:181] (0xc0002eb810) (0xc000b10960) Create stream\nI0824 23:51:29.791816 433 log.go:181] (0xc0002eb810) (0xc000b10960) Stream added, broadcasting: 1\nI0824 23:51:29.794362 433 log.go:181] (0xc0002eb810) Reply frame received for 1\nI0824 23:51:29.794438 433 log.go:181] (0xc0002eb810) (0xc000b10a00) Create stream\nI0824 23:51:29.794454 433 log.go:181] (0xc0002eb810) (0xc000b10a00) Stream added, broadcasting: 3\nI0824 23:51:29.795381 433 log.go:181] (0xc0002eb810) Reply frame received for 3\nI0824 23:51:29.795413 433 log.go:181] (0xc0002eb810) (0xc000d92f00) Create stream\nI0824 23:51:29.795425 433 log.go:181] (0xc0002eb810) (0xc000d92f00) Stream added, broadcasting: 5\nI0824 23:51:29.796278 433 log.go:181] (0xc0002eb810) Reply frame received for 5\nI0824 23:51:29.839098 433 log.go:181] (0xc0002eb810) Data frame received for 3\nI0824 23:51:29.839126 433 log.go:181] 
(0xc000b10a00) (3) Data frame handling\nI0824 23:51:29.839146 433 log.go:181] (0xc0002eb810) Data frame received for 5\nI0824 23:51:29.839153 433 log.go:181] (0xc000d92f00) (5) Data frame handling\nI0824 23:51:29.839162 433 log.go:181] (0xc000d92f00) (5) Data frame sent\nI0824 23:51:29.839167 433 log.go:181] (0xc0002eb810) Data frame received for 5\nI0824 23:51:29.839172 433 log.go:181] (0xc000d92f00) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 31870\nConnection to 172.18.0.14 31870 port [tcp/31870] succeeded!\nI0824 23:51:29.840566 433 log.go:181] (0xc0002eb810) Data frame received for 1\nI0824 23:51:29.840601 433 log.go:181] (0xc000b10960) (1) Data frame handling\nI0824 23:51:29.840637 433 log.go:181] (0xc000b10960) (1) Data frame sent\nI0824 23:51:29.840667 433 log.go:181] (0xc0002eb810) (0xc000b10960) Stream removed, broadcasting: 1\nI0824 23:51:29.840694 433 log.go:181] (0xc0002eb810) Go away received\nI0824 23:51:29.841157 433 log.go:181] (0xc0002eb810) (0xc000b10960) Stream removed, broadcasting: 1\nI0824 23:51:29.841171 433 log.go:181] (0xc0002eb810) (0xc000b10a00) Stream removed, broadcasting: 3\nI0824 23:51:29.841176 433 log.go:181] (0xc0002eb810) (0xc000d92f00) Stream removed, broadcasting: 5\n" Aug 24 23:51:29.853: INFO: stdout: "" [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:51:29.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8632" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:19.906 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":303,"completed":85,"skipped":1355,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:51:29.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] 
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Aug 24 23:51:30.107: INFO: Waiting up to 5m0s for pod "downwardapi-volume-db5921aa-4ae5-45f3-9f05-e4e96e3fe597" in namespace "downward-api-9882" to be "Succeeded or Failed"
Aug 24 23:51:30.203: INFO: Pod "downwardapi-volume-db5921aa-4ae5-45f3-9f05-e4e96e3fe597": Phase="Pending", Reason="", readiness=false. Elapsed: 96.253742ms
Aug 24 23:51:32.207: INFO: Pod "downwardapi-volume-db5921aa-4ae5-45f3-9f05-e4e96e3fe597": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100591523s
Aug 24 23:51:34.212: INFO: Pod "downwardapi-volume-db5921aa-4ae5-45f3-9f05-e4e96e3fe597": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104759045s
Aug 24 23:51:36.457: INFO: Pod "downwardapi-volume-db5921aa-4ae5-45f3-9f05-e4e96e3fe597": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.350019464s
STEP: Saw pod success
Aug 24 23:51:36.457: INFO: Pod "downwardapi-volume-db5921aa-4ae5-45f3-9f05-e4e96e3fe597" satisfied condition "Succeeded or Failed"
Aug 24 23:51:36.483: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-db5921aa-4ae5-45f3-9f05-e4e96e3fe597 container client-container:
STEP: delete the pod
Aug 24 23:51:36.766: INFO: Waiting for pod downwardapi-volume-db5921aa-4ae5-45f3-9f05-e4e96e3fe597 to disappear
Aug 24 23:51:36.813: INFO: Pod downwardapi-volume-db5921aa-4ae5-45f3-9f05-e4e96e3fe597 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 24 23:51:36.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9882" for this suite.
• [SLOW TEST:6.969 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":86,"skipped":1396,"failed":0}
SSSS
------------------------------
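The downward API volume in this spec projects limits.memory into a file; because the container declares no memory limit, the kubelet falls back to the node's allocatable memory, which is the value the test reads back. A sketch of the relevant volume wiring; the pod name, file path, and image are illustrative assumptions:

package downwardexample

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// memoryLimitPod mounts a downward API volume exposing the container's
// effective memory limit; with no resources.limits.memory set, the projected
// file reports node allocatable memory instead.
func memoryLimitPod(namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume", Namespace: namespace},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.32",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				// No resources.limits here on purpose.
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
		},
	}
}

------------------------------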
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 24 23:51:36.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 24 23:51:41.164: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 24 23:51:41.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3702" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":87,"skipped":1400,"failed":0}
SSS
------------------------------
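The check above hinges on terminationMessagePolicy: when set to FallbackToLogsOnError and the container exits non-zero without writing /dev/termination-log, the runtime uses the tail of the container's log, so the "DONE" printed to stdout becomes the termination message. A sketch of such a container spec; the names, image, and command are assumptions:

package runtimeexample

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fallbackToLogsPod fails without writing the termination-message file, so
// the kubelet falls back to the log tail ("DONE") as the message.
func fallbackToLogsPod(namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message", Namespace: namespace},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "termination-message-container",
				Image: "busybox:1.32",
				// Print to stdout and exit non-zero without touching
				// /dev/termination-log.
				Command:                  []string{"sh", "-c", "echo DONE; exit 1"},
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
}

------------------------------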
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":87,"skipped":1400,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:51:41.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:51:47.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5692" for this suite. • [SLOW TEST:5.784 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":303,"completed":88,"skipped":1403,"failed":0} SSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:51:47.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in 
namespace services-981 Aug 24 23:51:51.250: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-981 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Aug 24 23:51:51.472: INFO: stderr: "I0824 23:51:51.385399 451 log.go:181] (0xc001019080) (0xc0010108c0) Create stream\nI0824 23:51:51.385468 451 log.go:181] (0xc001019080) (0xc0010108c0) Stream added, broadcasting: 1\nI0824 23:51:51.390439 451 log.go:181] (0xc001019080) Reply frame received for 1\nI0824 23:51:51.390467 451 log.go:181] (0xc001019080) (0xc00061a000) Create stream\nI0824 23:51:51.390474 451 log.go:181] (0xc001019080) (0xc00061a000) Stream added, broadcasting: 3\nI0824 23:51:51.391214 451 log.go:181] (0xc001019080) Reply frame received for 3\nI0824 23:51:51.391249 451 log.go:181] (0xc001019080) (0xc000cc4000) Create stream\nI0824 23:51:51.391262 451 log.go:181] (0xc001019080) (0xc000cc4000) Stream added, broadcasting: 5\nI0824 23:51:51.392184 451 log.go:181] (0xc001019080) Reply frame received for 5\nI0824 23:51:51.454461 451 log.go:181] (0xc001019080) Data frame received for 5\nI0824 23:51:51.454484 451 log.go:181] (0xc000cc4000) (5) Data frame handling\nI0824 23:51:51.454495 451 log.go:181] (0xc000cc4000) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0824 23:51:51.460505 451 log.go:181] (0xc001019080) Data frame received for 3\nI0824 23:51:51.460528 451 log.go:181] (0xc00061a000) (3) Data frame handling\nI0824 23:51:51.460548 451 log.go:181] (0xc00061a000) (3) Data frame sent\nI0824 23:51:51.461299 451 log.go:181] (0xc001019080) Data frame received for 5\nI0824 23:51:51.461321 451 log.go:181] (0xc000cc4000) (5) Data frame handling\nI0824 23:51:51.461444 451 log.go:181] (0xc001019080) Data frame received for 3\nI0824 23:51:51.461470 451 log.go:181] (0xc00061a000) (3) Data frame handling\nI0824 23:51:51.463155 451 log.go:181] (0xc001019080) Data frame received for 1\nI0824 23:51:51.463179 451 log.go:181] (0xc0010108c0) (1) Data frame handling\nI0824 23:51:51.463191 451 log.go:181] (0xc0010108c0) (1) Data frame sent\nI0824 23:51:51.463206 451 log.go:181] (0xc001019080) (0xc0010108c0) Stream removed, broadcasting: 1\nI0824 23:51:51.463223 451 log.go:181] (0xc001019080) Go away received\nI0824 23:51:51.463724 451 log.go:181] (0xc001019080) (0xc0010108c0) Stream removed, broadcasting: 1\nI0824 23:51:51.463747 451 log.go:181] (0xc001019080) (0xc00061a000) Stream removed, broadcasting: 3\nI0824 23:51:51.463759 451 log.go:181] (0xc001019080) (0xc000cc4000) Stream removed, broadcasting: 5\n" Aug 24 23:51:51.473: INFO: stdout: "iptables" Aug 24 23:51:51.473: INFO: proxyMode: iptables Aug 24 23:51:51.518: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 24 23:51:51.562: INFO: Pod kube-proxy-mode-detector still exists Aug 24 23:51:53.562: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 24 23:51:53.566: INFO: Pod kube-proxy-mode-detector still exists Aug 24 23:51:55.562: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 24 23:51:55.568: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-981 STEP: creating replication controller affinity-nodeport-timeout in namespace services-981 I0824 23:51:55.645745 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-981, replica count: 3 I0824 23:51:58.696122 7 
runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0824 23:52:01.696317 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 24 23:52:01.715: INFO: Creating new exec pod Aug 24 23:52:06.733: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-981 execpod-affinitykmtkg -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' Aug 24 23:52:07.299: INFO: stderr: "I0824 23:52:07.215708 469 log.go:181] (0xc0007ab550) (0xc0007a2960) Create stream\nI0824 23:52:07.215757 469 log.go:181] (0xc0007ab550) (0xc0007a2960) Stream added, broadcasting: 1\nI0824 23:52:07.224554 469 log.go:181] (0xc0007ab550) Reply frame received for 1\nI0824 23:52:07.224608 469 log.go:181] (0xc0007ab550) (0xc00076c000) Create stream\nI0824 23:52:07.224624 469 log.go:181] (0xc0007ab550) (0xc00076c000) Stream added, broadcasting: 3\nI0824 23:52:07.225723 469 log.go:181] (0xc0007ab550) Reply frame received for 3\nI0824 23:52:07.225794 469 log.go:181] (0xc0007ab550) (0xc0001a0140) Create stream\nI0824 23:52:07.225834 469 log.go:181] (0xc0007ab550) (0xc0001a0140) Stream added, broadcasting: 5\nI0824 23:52:07.226546 469 log.go:181] (0xc0007ab550) Reply frame received for 5\nI0824 23:52:07.287324 469 log.go:181] (0xc0007ab550) Data frame received for 5\nI0824 23:52:07.287442 469 log.go:181] (0xc0001a0140) (5) Data frame handling\nI0824 23:52:07.287473 469 log.go:181] (0xc0001a0140) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nI0824 23:52:07.287827 469 log.go:181] (0xc0007ab550) Data frame received for 5\nI0824 23:52:07.287848 469 log.go:181] (0xc0001a0140) (5) Data frame handling\nI0824 23:52:07.287858 469 log.go:181] (0xc0001a0140) (5) Data frame sent\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0824 23:52:07.288008 469 log.go:181] (0xc0007ab550) Data frame received for 5\nI0824 23:52:07.288025 469 log.go:181] (0xc0001a0140) (5) Data frame handling\nI0824 23:52:07.288326 469 log.go:181] (0xc0007ab550) Data frame received for 3\nI0824 23:52:07.288349 469 log.go:181] (0xc00076c000) (3) Data frame handling\nI0824 23:52:07.289887 469 log.go:181] (0xc0007ab550) Data frame received for 1\nI0824 23:52:07.289913 469 log.go:181] (0xc0007a2960) (1) Data frame handling\nI0824 23:52:07.289939 469 log.go:181] (0xc0007a2960) (1) Data frame sent\nI0824 23:52:07.289956 469 log.go:181] (0xc0007ab550) (0xc0007a2960) Stream removed, broadcasting: 1\nI0824 23:52:07.289973 469 log.go:181] (0xc0007ab550) Go away received\nI0824 23:52:07.290489 469 log.go:181] (0xc0007ab550) (0xc0007a2960) Stream removed, broadcasting: 1\nI0824 23:52:07.290524 469 log.go:181] (0xc0007ab550) (0xc00076c000) Stream removed, broadcasting: 3\nI0824 23:52:07.290538 469 log.go:181] (0xc0007ab550) (0xc0001a0140) Stream removed, broadcasting: 5\n" Aug 24 23:52:07.299: INFO: stdout: "" Aug 24 23:52:07.300: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-981 execpod-affinitykmtkg -- /bin/sh -x -c nc -zv -t -w 2 10.104.148.151 80' Aug 24 23:52:07.517: INFO: stderr: "I0824 23:52:07.439013 487 log.go:181] (0xc0006b3080) (0xc0007348c0) Create stream\nI0824 23:52:07.439063 487 log.go:181] (0xc0006b3080) (0xc0007348c0) Stream added, broadcasting: 
1\nI0824 23:52:07.444000 487 log.go:181] (0xc0006b3080) Reply frame received for 1\nI0824 23:52:07.444046 487 log.go:181] (0xc0006b3080) (0xc0003cde00) Create stream\nI0824 23:52:07.444059 487 log.go:181] (0xc0006b3080) (0xc0003cde00) Stream added, broadcasting: 3\nI0824 23:52:07.445046 487 log.go:181] (0xc0006b3080) Reply frame received for 3\nI0824 23:52:07.445076 487 log.go:181] (0xc0006b3080) (0xc000734000) Create stream\nI0824 23:52:07.445087 487 log.go:181] (0xc0006b3080) (0xc000734000) Stream added, broadcasting: 5\nI0824 23:52:07.445808 487 log.go:181] (0xc0006b3080) Reply frame received for 5\nI0824 23:52:07.509569 487 log.go:181] (0xc0006b3080) Data frame received for 3\nI0824 23:52:07.509600 487 log.go:181] (0xc0003cde00) (3) Data frame handling\nI0824 23:52:07.509619 487 log.go:181] (0xc0006b3080) Data frame received for 5\nI0824 23:52:07.509626 487 log.go:181] (0xc000734000) (5) Data frame handling\nI0824 23:52:07.509635 487 log.go:181] (0xc000734000) (5) Data frame sent\nI0824 23:52:07.509641 487 log.go:181] (0xc0006b3080) Data frame received for 5\nI0824 23:52:07.509648 487 log.go:181] (0xc000734000) (5) Data frame handling\n+ nc -zv -t -w 2 10.104.148.151 80\nConnection to 10.104.148.151 80 port [tcp/http] succeeded!\nI0824 23:52:07.510906 487 log.go:181] (0xc0006b3080) Data frame received for 1\nI0824 23:52:07.510931 487 log.go:181] (0xc0007348c0) (1) Data frame handling\nI0824 23:52:07.510943 487 log.go:181] (0xc0007348c0) (1) Data frame sent\nI0824 23:52:07.510957 487 log.go:181] (0xc0006b3080) (0xc0007348c0) Stream removed, broadcasting: 1\nI0824 23:52:07.511059 487 log.go:181] (0xc0006b3080) Go away received\nI0824 23:52:07.511254 487 log.go:181] (0xc0006b3080) (0xc0007348c0) Stream removed, broadcasting: 1\nI0824 23:52:07.511267 487 log.go:181] (0xc0006b3080) (0xc0003cde00) Stream removed, broadcasting: 3\nI0824 23:52:07.511273 487 log.go:181] (0xc0006b3080) (0xc000734000) Stream removed, broadcasting: 5\n" Aug 24 23:52:07.517: INFO: stdout: "" Aug 24 23:52:07.518: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-981 execpod-affinitykmtkg -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 30544' Aug 24 23:52:07.754: INFO: stderr: "I0824 23:52:07.667847 505 log.go:181] (0xc000d2b4a0) (0xc000d22960) Create stream\nI0824 23:52:07.667892 505 log.go:181] (0xc000d2b4a0) (0xc000d22960) Stream added, broadcasting: 1\nI0824 23:52:07.671901 505 log.go:181] (0xc000d2b4a0) Reply frame received for 1\nI0824 23:52:07.671941 505 log.go:181] (0xc000d2b4a0) (0xc000cba0a0) Create stream\nI0824 23:52:07.671954 505 log.go:181] (0xc000d2b4a0) (0xc000cba0a0) Stream added, broadcasting: 3\nI0824 23:52:07.672918 505 log.go:181] (0xc000d2b4a0) Reply frame received for 3\nI0824 23:52:07.672951 505 log.go:181] (0xc000d2b4a0) (0xc000d22000) Create stream\nI0824 23:52:07.672959 505 log.go:181] (0xc000d2b4a0) (0xc000d22000) Stream added, broadcasting: 5\nI0824 23:52:07.674046 505 log.go:181] (0xc000d2b4a0) Reply frame received for 5\nI0824 23:52:07.744446 505 log.go:181] (0xc000d2b4a0) Data frame received for 5\nI0824 23:52:07.744474 505 log.go:181] (0xc000d22000) (5) Data frame handling\nI0824 23:52:07.744489 505 log.go:181] (0xc000d22000) (5) Data frame sent\nI0824 23:52:07.744495 505 log.go:181] (0xc000d2b4a0) Data frame received for 5\nI0824 23:52:07.744500 505 log.go:181] (0xc000d22000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.11 30544\nConnection to 172.18.0.11 30544 port [tcp/30544] succeeded!\nI0824 
23:52:07.744654 505 log.go:181] (0xc000d2b4a0) Data frame received for 3\nI0824 23:52:07.744670 505 log.go:181] (0xc000cba0a0) (3) Data frame handling\nI0824 23:52:07.746895 505 log.go:181] (0xc000d2b4a0) Data frame received for 1\nI0824 23:52:07.746917 505 log.go:181] (0xc000d22960) (1) Data frame handling\nI0824 23:52:07.746937 505 log.go:181] (0xc000d22960) (1) Data frame sent\nI0824 23:52:07.746955 505 log.go:181] (0xc000d2b4a0) (0xc000d22960) Stream removed, broadcasting: 1\nI0824 23:52:07.746966 505 log.go:181] (0xc000d2b4a0) Go away received\nI0824 23:52:07.747302 505 log.go:181] (0xc000d2b4a0) (0xc000d22960) Stream removed, broadcasting: 1\nI0824 23:52:07.747325 505 log.go:181] (0xc000d2b4a0) (0xc000cba0a0) Stream removed, broadcasting: 3\nI0824 23:52:07.747333 505 log.go:181] (0xc000d2b4a0) (0xc000d22000) Stream removed, broadcasting: 5\n" Aug 24 23:52:07.754: INFO: stdout: "" Aug 24 23:52:07.754: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-981 execpod-affinitykmtkg -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 30544' Aug 24 23:52:08.087: INFO: stderr: "I0824 23:52:07.986848 523 log.go:181] (0xc000ab8dc0) (0xc000a2e640) Create stream\nI0824 23:52:07.986920 523 log.go:181] (0xc000ab8dc0) (0xc000a2e640) Stream added, broadcasting: 1\nI0824 23:52:07.991677 523 log.go:181] (0xc000ab8dc0) Reply frame received for 1\nI0824 23:52:07.991823 523 log.go:181] (0xc000ab8dc0) (0xc000c96000) Create stream\nI0824 23:52:07.991891 523 log.go:181] (0xc000ab8dc0) (0xc000c96000) Stream added, broadcasting: 3\nI0824 23:52:07.994436 523 log.go:181] (0xc000ab8dc0) Reply frame received for 3\nI0824 23:52:07.994474 523 log.go:181] (0xc000ab8dc0) (0xc000c960a0) Create stream\nI0824 23:52:07.994485 523 log.go:181] (0xc000ab8dc0) (0xc000c960a0) Stream added, broadcasting: 5\nI0824 23:52:07.995450 523 log.go:181] (0xc000ab8dc0) Reply frame received for 5\nI0824 23:52:08.068922 523 log.go:181] (0xc000ab8dc0) Data frame received for 3\nI0824 23:52:08.068960 523 log.go:181] (0xc000c96000) (3) Data frame handling\nI0824 23:52:08.069005 523 log.go:181] (0xc000ab8dc0) Data frame received for 5\nI0824 23:52:08.069039 523 log.go:181] (0xc000c960a0) (5) Data frame handling\nI0824 23:52:08.069071 523 log.go:181] (0xc000c960a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.14 30544\nConnection to 172.18.0.14 30544 port [tcp/30544] succeeded!\nI0824 23:52:08.069519 523 log.go:181] (0xc000ab8dc0) Data frame received for 5\nI0824 23:52:08.069536 523 log.go:181] (0xc000c960a0) (5) Data frame handling\nI0824 23:52:08.071911 523 log.go:181] (0xc000ab8dc0) Data frame received for 1\nI0824 23:52:08.071958 523 log.go:181] (0xc000a2e640) (1) Data frame handling\nI0824 23:52:08.071987 523 log.go:181] (0xc000a2e640) (1) Data frame sent\nI0824 23:52:08.072020 523 log.go:181] (0xc000ab8dc0) (0xc000a2e640) Stream removed, broadcasting: 1\nI0824 23:52:08.072395 523 log.go:181] (0xc000ab8dc0) (0xc000a2e640) Stream removed, broadcasting: 1\nI0824 23:52:08.072414 523 log.go:181] (0xc000ab8dc0) (0xc000c96000) Stream removed, broadcasting: 3\nI0824 23:52:08.072424 523 log.go:181] (0xc000ab8dc0) (0xc000c960a0) Stream removed, broadcasting: 5\n" Aug 24 23:52:08.088: INFO: stdout: "" Aug 24 23:52:08.088: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-981 execpod-affinitykmtkg -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 
http://172.18.0.11:30544/ ; done' Aug 24 23:52:08.382: INFO: stderr: "I0824 23:52:08.214525 541 log.go:181] (0xc0000314a0) (0xc0007641e0) Create stream\nI0824 23:52:08.214575 541 log.go:181] (0xc0000314a0) (0xc0007641e0) Stream added, broadcasting: 1\nI0824 23:52:08.216416 541 log.go:181] (0xc0000314a0) Reply frame received for 1\nI0824 23:52:08.216461 541 log.go:181] (0xc0000314a0) (0xc000c368c0) Create stream\nI0824 23:52:08.216471 541 log.go:181] (0xc0000314a0) (0xc000c368c0) Stream added, broadcasting: 3\nI0824 23:52:08.217681 541 log.go:181] (0xc0000314a0) Reply frame received for 3\nI0824 23:52:08.217720 541 log.go:181] (0xc0000314a0) (0xc0008d2000) Create stream\nI0824 23:52:08.217732 541 log.go:181] (0xc0000314a0) (0xc0008d2000) Stream added, broadcasting: 5\nI0824 23:52:08.218564 541 log.go:181] (0xc0000314a0) Reply frame received for 5\nI0824 23:52:08.278464 541 log.go:181] (0xc0000314a0) Data frame received for 5\nI0824 23:52:08.278505 541 log.go:181] (0xc0008d2000) (5) Data frame handling\nI0824 23:52:08.278519 541 log.go:181] (0xc0008d2000) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30544/\nI0824 23:52:08.278540 541 log.go:181] (0xc0000314a0) Data frame received for 3\nI0824 23:52:08.278550 541 log.go:181] (0xc000c368c0) (3) Data frame handling\nI0824 23:52:08.278561 541 log.go:181] (0xc000c368c0) (3) Data frame sent\nI0824 23:52:08.285569 541 log.go:181] (0xc0000314a0) Data frame received for 3\nI0824 23:52:08.285592 541 log.go:181] (0xc000c368c0) (3) Data frame handling\nI0824 23:52:08.285603 541 log.go:181] (0xc000c368c0) (3) Data frame sent\nI0824 23:52:08.286352 541 log.go:181] (0xc0000314a0) Data frame received for 5\nI0824 23:52:08.286368 541 log.go:181] (0xc0008d2000) (5) Data frame handling\nI0824 23:52:08.286374 541 log.go:181] (0xc0008d2000) (5) Data frame sent\nI0824 23:52:08.286379 541 log.go:181] (0xc0000314a0) Data frame received for 5\nI0824 23:52:08.286382 541 log.go:181] (0xc0008d2000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30544/\nI0824 23:52:08.286394 541 log.go:181] (0xc0008d2000) (5) Data frame sent\nI0824 23:52:08.286399 541 log.go:181] (0xc0000314a0) Data frame received for 3\nI0824 23:52:08.286404 541 log.go:181] (0xc000c368c0) (3) Data frame handling\nI0824 23:52:08.286408 541 log.go:181] (0xc000c368c0) (3) Data frame sent\nI0824 23:52:08.294171 541 log.go:181] (0xc0000314a0) Data frame received for 3\nI0824 23:52:08.294192 541 log.go:181] (0xc000c368c0) (3) Data frame handling\nI0824 23:52:08.294208 541 log.go:181] (0xc000c368c0) (3) Data frame sent\nI0824 23:52:08.294861 541 log.go:181] (0xc0000314a0) Data frame received for 3\nI0824 23:52:08.294876 541 log.go:181] (0xc000c368c0) (3) Data frame handling\nI0824 23:52:08.294887 541 log.go:181] (0xc000c368c0) (3) Data frame sent\nI0824 23:52:08.294902 541 log.go:181] (0xc0000314a0) Data frame received for 5\nI0824 23:52:08.294920 541 log.go:181] (0xc0008d2000) (5) Data frame handling\nI0824 23:52:08.294934 541 log.go:181] (0xc0008d2000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30544/\nI0824 23:52:08.298283 541 log.go:181] (0xc0000314a0) Data frame received for 3\nI0824 23:52:08.298299 541 log.go:181] (0xc000c368c0) (3) Data frame handling\nI0824 23:52:08.298314 541 log.go:181] (0xc000c368c0) (3) Data frame sent\nI0824 23:52:08.299099 541 log.go:181] (0xc0000314a0) Data frame received for 5\nI0824 23:52:08.299125 541 log.go:181] (0xc0008d2000) (5) Data frame 
handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:30544/\n[... remaining echo/curl iterations and their kubectl exec stream-frame (log.go:181) records elided; 16 requests were issued in total ...]\nI0824 23:52:08.375641 541 log.go:181] (0xc0000314a0) (0xc0007641e0) Stream removed, broadcasting: 1\nI0824 23:52:08.375654 541 log.go:181] (0xc0000314a0) (0xc000c368c0) Stream removed, broadcasting: 3\nI0824 23:52:08.375661 541 log.go:181] (0xc0000314a0) (0xc0008d2000) Stream removed, broadcasting: 5\n"
Aug 24 23:52:08.383: INFO: stdout: "\naffinity-nodeport-timeout-5f8d6" (the same backend name, repeated 16 times)
Aug 24 23:52:08.383: INFO: Received response from host: affinity-nodeport-timeout-5f8d6 (logged 16 times, once per request)
Aug 24 23:52:08.383: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-981 execpod-affinitykmtkg -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.11:30544/'
Aug 24 23:52:08.730: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.18.0.11:30544/\n" [stream-frame records elided]
Aug 24 23:52:08.730: INFO: stdout: "affinity-nodeport-timeout-5f8d6"
Aug 24 23:52:23.730: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-981 execpod-affinitykmtkg -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.11:30544/'
Aug 24 23:52:23.990: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.18.0.11:30544/\n" [stream-frame records elided]
Aug 24 23:52:23.990: INFO: stdout: "affinity-nodeport-timeout-rtqlg"
Aug 24 23:52:23.990: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-981, will wait for the garbage collector to delete the pods
Aug 24 23:52:24.117: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 7.206547ms
Aug 24 23:52:24.617: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 500.227394ms
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 24 23:52:40.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-981" for this suite.
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:53.391 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":89,"skipped":1406,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 24 23:52:40.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
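For reference, the session-affinity behavior the Services test above just verified — sixteen consecutive requests answered by affinity-nodeport-timeout-5f8d6, then a switch to affinity-nodeport-timeout-rtqlg once the client sat idle past the affinity window — is controlled by two fields on the Service. A minimal hand-written sketch (the names and the 10-second timeout are illustrative assumptions; the suite's exact spec is not shown in this log):

# Sketch only: a NodePort Service with ClientIP session affinity and a
# short affinity timeout. All names and the 10s value are assumptions.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: affinity-demo                # hypothetical name
spec:
  type: NodePort
  selector:
    app: affinity-backend            # hypothetical pod label
  ports:
  - port: 80
    targetPort: 8080
  sessionAffinity: ClientIP          # pin each client IP to one backend pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10             # drop the affinity entry after 10s idle (assumed value)
EOF

Once timeoutSeconds passes with no traffic from the client, kube-proxy forgets the affinity entry and the next request may land on any backend — the backend change visible between 23:52:08 and 23:52:23 above.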
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 24 23:52:46.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3754" for this suite.
• [SLOW TEST:6.228 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":303,"completed":90,"skipped":1430,"failed":0}
SSSSSSS
------------------------------
[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 24 23:52:46.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a service externalname-service with the type=ExternalName in namespace services-5192
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-5192
I0824 23:52:47.069808 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-5192, replica count: 2
I0824 23:52:50.120260 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0824 23:52:53.120541 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Aug 24 23:52:53.120: INFO: Creating new exec pod
Aug 24 23:52:58.230: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-5192 execpodrztbb -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Aug 24 23:52:58.563: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" [stream-frame records elided]
Aug 24 23:52:58.563: INFO: stdout: ""
Aug 24 23:52:58.564: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-5192 execpodrztbb -- /bin/sh -x -c nc -zv -t -w 2 10.105.178.20 80'
Aug 24 23:52:58.854: INFO: stderr: "+ nc -zv -t -w 2 10.105.178.20 80\nConnection to 10.105.178.20 80 port [tcp/http] succeeded!\n" [stream-frame records elided]
Aug 24 23:52:58.854: INFO: stdout: ""
Aug 24 23:52:58.854: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 24 23:52:59.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5192" for this suite.
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:12.575 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":303,"completed":91,"skipped":1437,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 24 23:52:59.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: getting the auto-created API token
STEP: reading a file in the container
Aug 24 23:53:04.073: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2099 pod-service-account-9a6ea699-725e-4785-9c24-2001492d053f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Aug 24 23:53:04.309: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2099 pod-service-account-9a6ea699-725e-4785-9c24-2001492d053f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container Aug 24
23:53:04.553: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2099 pod-service-account-9a6ea699-725e-4785-9c24-2001492d053f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:53:04.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2099" for this suite. • [SLOW TEST:5.664 seconds] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":303,"completed":92,"skipped":1456,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:53:04.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 24 23:53:05.019: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7979ddad-82da-44a6-b9f2-3341e8675df6" in namespace "projected-1767" to be "Succeeded or Failed" Aug 24 23:53:05.238: INFO: Pod "downwardapi-volume-7979ddad-82da-44a6-b9f2-3341e8675df6": Phase="Pending", Reason="", readiness=false. Elapsed: 218.764107ms Aug 24 23:53:07.380: INFO: Pod "downwardapi-volume-7979ddad-82da-44a6-b9f2-3341e8675df6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.360912772s Aug 24 23:53:09.384: INFO: Pod "downwardapi-volume-7979ddad-82da-44a6-b9f2-3341e8675df6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.364294987s STEP: Saw pod success Aug 24 23:53:09.384: INFO: Pod "downwardapi-volume-7979ddad-82da-44a6-b9f2-3341e8675df6" satisfied condition "Succeeded or Failed" Aug 24 23:53:09.386: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-7979ddad-82da-44a6-b9f2-3341e8675df6 container client-container: STEP: delete the pod Aug 24 23:53:09.512: INFO: Waiting for pod downwardapi-volume-7979ddad-82da-44a6-b9f2-3341e8675df6 to disappear Aug 24 23:53:09.523: INFO: Pod downwardapi-volume-7979ddad-82da-44a6-b9f2-3341e8675df6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:53:09.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1767" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":93,"skipped":1496,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:53:09.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-694/secret-test-093cc3ee-dce0-4c27-b467-3f8402e3ef49 STEP: Creating a pod to test consume secrets Aug 24 23:53:09.590: INFO: Waiting up to 5m0s for pod "pod-configmaps-0d4dd286-87b1-477b-87de-36a499aa1ad5" in namespace "secrets-694" to be "Succeeded or Failed" Aug 24 23:53:09.609: INFO: Pod "pod-configmaps-0d4dd286-87b1-477b-87de-36a499aa1ad5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.302297ms Aug 24 23:53:11.674: INFO: Pod "pod-configmaps-0d4dd286-87b1-477b-87de-36a499aa1ad5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083370496s Aug 24 23:53:13.678: INFO: Pod "pod-configmaps-0d4dd286-87b1-477b-87de-36a499aa1ad5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.087430923s STEP: Saw pod success Aug 24 23:53:13.678: INFO: Pod "pod-configmaps-0d4dd286-87b1-477b-87de-36a499aa1ad5" satisfied condition "Succeeded or Failed" Aug 24 23:53:13.680: INFO: Trying to get logs from node latest-worker pod pod-configmaps-0d4dd286-87b1-477b-87de-36a499aa1ad5 container env-test: STEP: delete the pod Aug 24 23:53:13.739: INFO: Waiting for pod pod-configmaps-0d4dd286-87b1-477b-87de-36a499aa1ad5 to disappear Aug 24 23:53:13.751: INFO: Pod pod-configmaps-0d4dd286-87b1-477b-87de-36a499aa1ad5 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:53:13.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-694" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":94,"skipped":1520,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:53:13.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 24 23:53:13.838: INFO: Waiting up to 5m0s for pod "pod-c8f6cdd4-d94c-4aa9-b942-e3783aa58b5c" in namespace "emptydir-4098" to be "Succeeded or Failed" Aug 24 23:53:13.871: INFO: Pod "pod-c8f6cdd4-d94c-4aa9-b942-e3783aa58b5c": Phase="Pending", Reason="", readiness=false. Elapsed: 32.733433ms Aug 24 23:53:15.876: INFO: Pod "pod-c8f6cdd4-d94c-4aa9-b942-e3783aa58b5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037890443s Aug 24 23:53:17.967: INFO: Pod "pod-c8f6cdd4-d94c-4aa9-b942-e3783aa58b5c": Phase="Running", Reason="", readiness=true. Elapsed: 4.128834379s Aug 24 23:53:19.971: INFO: Pod "pod-c8f6cdd4-d94c-4aa9-b942-e3783aa58b5c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.133481636s STEP: Saw pod success Aug 24 23:53:19.972: INFO: Pod "pod-c8f6cdd4-d94c-4aa9-b942-e3783aa58b5c" satisfied condition "Succeeded or Failed" Aug 24 23:53:19.975: INFO: Trying to get logs from node latest-worker pod pod-c8f6cdd4-d94c-4aa9-b942-e3783aa58b5c container test-container: STEP: delete the pod Aug 24 23:53:20.151: INFO: Waiting for pod pod-c8f6cdd4-d94c-4aa9-b942-e3783aa58b5c to disappear Aug 24 23:53:20.153: INFO: Pod pod-c8f6cdd4-d94c-4aa9-b942-e3783aa58b5c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:53:20.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4098" for this suite. • [SLOW TEST:6.705 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":95,"skipped":1537,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:53:20.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:53:29.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4858" for this suite. STEP: Destroying namespace "nsdeletetest-2764" for this suite. Aug 24 23:53:29.530: INFO: Namespace nsdeletetest-2764 was already deleted STEP: Destroying namespace "nsdeletetest-3389" for this suite. 
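The namespace check that just finished above — create a namespace, put a Service in it, delete the namespace, recreate it, and confirm the Service is gone — can be replayed by hand. A rough kubectl equivalent, with made-up names:

# Manual replay of the namespace-deletion check; every name is illustrative.
kubectl create namespace nsdelete-demo
kubectl create service clusterip demo-svc --tcp=80:80 --namespace=nsdelete-demo
kubectl delete namespace nsdelete-demo                          # cascades to the Service
kubectl wait --for=delete namespace/nsdelete-demo --timeout=60s
kubectl create namespace nsdelete-demo                          # same name, fresh namespace
kubectl get services --namespace=nsdelete-demo                  # expect: No resources found

Deleting a namespace deletes every namespaced object inside it, so the recreated namespace starts empty; that is the property the test asserts before its summary below.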
• [SLOW TEST:9.073 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":303,"completed":96,"skipped":1543,"failed":0} SSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:53:29.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Aug 24 23:53:29.789: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:53:40.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8157" for this suite. 
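The Pods test that just completed above follows a watch/submit/delete choreography that can be reproduced from a shell. A sketch with assumed names:

# Reproduce the submit/watch/delete cycle by hand (names are illustrative).
kubectl get pods --watch --namespace=default &                  # "setting up watch"
WATCH_PID=$!
kubectl run pod-demo --image=nginx --namespace=default          # "submitting the pod"
kubectl wait --for=condition=Ready pod/pod-demo --namespace=default --timeout=120s
kubectl delete pod pod-demo --namespace=default --grace-period=30   # graceful deletion
kill "$WATCH_PID"                                               # watch showed Pending -> Running -> Terminating

The --grace-period flag mirrors the "deleting the pod gracefully" step: the pod lingers in Terminating until the kubelet confirms shutdown, so deletion shows up on the watch as a series of updates before the final removal.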
• [SLOW TEST:10.935 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":303,"completed":97,"skipped":1550,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:53:40.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-22fcbaa1-649c-4193-ac8c-9eacecbe96c3 STEP: Creating secret with name secret-projected-all-test-volume-f98d65a8-9bd8-461a-af0f-7a864ad8465b STEP: Creating a pod to test Check all projections for projected volume plugin Aug 24 23:53:40.773: INFO: Waiting up to 5m0s for pod "projected-volume-3f6a480b-0fc9-4e99-84ad-867adc01ba25" in namespace "projected-5427" to be "Succeeded or Failed" Aug 24 23:53:40.967: INFO: Pod "projected-volume-3f6a480b-0fc9-4e99-84ad-867adc01ba25": Phase="Pending", Reason="", readiness=false. Elapsed: 194.640002ms Aug 24 23:53:43.135: INFO: Pod "projected-volume-3f6a480b-0fc9-4e99-84ad-867adc01ba25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.362714228s Aug 24 23:53:45.139: INFO: Pod "projected-volume-3f6a480b-0fc9-4e99-84ad-867adc01ba25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.366616423s Aug 24 23:53:47.143: INFO: Pod "projected-volume-3f6a480b-0fc9-4e99-84ad-867adc01ba25": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.37015592s STEP: Saw pod success Aug 24 23:53:47.143: INFO: Pod "projected-volume-3f6a480b-0fc9-4e99-84ad-867adc01ba25" satisfied condition "Succeeded or Failed" Aug 24 23:53:47.145: INFO: Trying to get logs from node latest-worker pod projected-volume-3f6a480b-0fc9-4e99-84ad-867adc01ba25 container projected-all-volume-test: STEP: delete the pod Aug 24 23:53:47.274: INFO: Waiting for pod projected-volume-3f6a480b-0fc9-4e99-84ad-867adc01ba25 to disappear Aug 24 23:53:47.291: INFO: Pod projected-volume-3f6a480b-0fc9-4e99-84ad-867adc01ba25 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:53:47.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5427" for this suite. • [SLOW TEST:6.828 seconds] [sig-storage] Projected combined /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":303,"completed":98,"skipped":1558,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:53:47.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
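The container-lifecycle test starting here exercises a postStart exec hook. In outline, the pod it creates in the [It] block below is something like this hand-written sketch (the image and hook command are assumptions, not the suite's actual spec):

# Sketch of a pod with a postStart exec lifecycle hook; spec details assumed.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook   # name as it appears in the log below
spec:
  containers:
  - name: main
    image: busybox:1.29                                         # assumed image
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo started > /tmp/poststart"]   # assumed hook command
EOF

The kubelet runs the postStart command right after the container starts; if the hook fails, the container is killed and restarted according to its restart policy.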
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Aug 24 23:53:55.981: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 24 23:53:55.992: INFO: Pod pod-with-poststart-exec-hook still exists Aug 24 23:53:57.992: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 24 23:53:57.996: INFO: Pod pod-with-poststart-exec-hook still exists Aug 24 23:53:59.992: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 24 23:53:59.996: INFO: Pod pod-with-poststart-exec-hook still exists Aug 24 23:54:01.992: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 24 23:54:01.996: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:54:01.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7924" for this suite. • [SLOW TEST:14.704 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":303,"completed":99,"skipped":1592,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:54:02.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 24 23:54:03.793: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 24 23:54:05.805: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910043, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910043, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910043, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910043, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 24 23:54:07.808: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910043, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910043, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910043, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910043, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 24 23:54:10.842: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:54:11.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3829" for this suite. STEP: Destroying namespace "webhook-3829-markers" for this suite. 
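The fail-closed behavior asserted above comes down to failurePolicy: Fail on a webhook whose backend cannot be reached. A minimal hand-written registration sketch (all names are invented, nothing listens behind the referenced service, and a rule this broad would block real configmap creates, so it is illustration only):

# Sketch: a validating webhook that fails closed. Names are invented and
# no server exists behind the service, so matching requests get rejected.
kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: fail-closed-demo
webhooks:
- name: fail-closed.demo.example.com
  failurePolicy: Fail                 # unreachable backend => request rejected
  sideEffects: None
  admissionReviewVersions: ["v1"]
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: default              # hypothetical; no webhook server runs here
      name: no-such-webhook-server
      path: /validate
EOF

With no server answering, every configmap CREATE that matches the rule is rejected — the "unconditionally reject" outcome recorded in the summary that follows.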
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.467 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":303,"completed":100,"skipped":1601,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:54:12.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-46 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-46;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-46 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-46;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-46.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-46.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-46.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-46.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-46.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-46.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-46.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-46.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-46.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-46.svc;check="$$(dig +tcp +noall +answer 
+search _http._tcp.test-service-2.dns-46.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-46.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-46.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 133.183.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.183.133_udp@PTR;check="$$(dig +tcp +noall +answer +search 133.183.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.183.133_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-46 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-46;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-46 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-46;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-46.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-46.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-46.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-46.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-46.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-46.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-46.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-46.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-46.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-46.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-46.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-46.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-46.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 133.183.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.183.133_udp@PTR;check="$$(dig +tcp +noall +answer +search 133.183.108.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.108.183.133_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 24 23:54:21.397: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:21.413: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:21.417: INFO: Unable to read wheezy_udp@dns-test-service.dns-46 from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:21.421: INFO: Unable to read wheezy_tcp@dns-test-service.dns-46 from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:21.424: INFO: Unable to read wheezy_udp@dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:21.427: INFO: Unable to read wheezy_tcp@dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:21.430: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:21.433: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:21.454: INFO: Unable to read jessie_udp@dns-test-service from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:21.456: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:21.459: INFO: Unable to read jessie_udp@dns-test-service.dns-46 from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:21.462: INFO: Unable to read jessie_tcp@dns-test-service.dns-46 from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:21.465: INFO: Unable to read jessie_udp@dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:21.467: INFO: Unable to read jessie_tcp@dns-test-service.dns-46.svc from pod 
dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:21.471: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:21.474: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:21.492: INFO: Lookups using dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-46 wheezy_tcp@dns-test-service.dns-46 wheezy_udp@dns-test-service.dns-46.svc wheezy_tcp@dns-test-service.dns-46.svc wheezy_udp@_http._tcp.dns-test-service.dns-46.svc wheezy_tcp@_http._tcp.dns-test-service.dns-46.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-46 jessie_tcp@dns-test-service.dns-46 jessie_udp@dns-test-service.dns-46.svc jessie_tcp@dns-test-service.dns-46.svc jessie_udp@_http._tcp.dns-test-service.dns-46.svc jessie_tcp@_http._tcp.dns-test-service.dns-46.svc] Aug 24 23:54:26.535: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:26.538: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:26.542: INFO: Unable to read wheezy_udp@dns-test-service.dns-46 from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:26.545: INFO: Unable to read wheezy_tcp@dns-test-service.dns-46 from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:26.548: INFO: Unable to read wheezy_udp@dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:26.551: INFO: Unable to read wheezy_tcp@dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:26.554: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:26.558: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:26.581: INFO: Unable to read jessie_udp@dns-test-service from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the 
requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:26.584: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:26.587: INFO: Unable to read jessie_udp@dns-test-service.dns-46 from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:26.591: INFO: Unable to read jessie_tcp@dns-test-service.dns-46 from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:26.594: INFO: Unable to read jessie_udp@dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:26.597: INFO: Unable to read jessie_tcp@dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:26.601: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:26.604: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:26.624: INFO: Lookups using dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-46 wheezy_tcp@dns-test-service.dns-46 wheezy_udp@dns-test-service.dns-46.svc wheezy_tcp@dns-test-service.dns-46.svc wheezy_udp@_http._tcp.dns-test-service.dns-46.svc wheezy_tcp@_http._tcp.dns-test-service.dns-46.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-46 jessie_tcp@dns-test-service.dns-46 jessie_udp@dns-test-service.dns-46.svc jessie_tcp@dns-test-service.dns-46.svc jessie_udp@_http._tcp.dns-test-service.dns-46.svc jessie_tcp@_http._tcp.dns-test-service.dns-46.svc] Aug 24 23:54:31.497: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:31.500: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:31.503: INFO: Unable to read wheezy_udp@dns-test-service.dns-46 from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:31.525: INFO: Unable to read wheezy_tcp@dns-test-service.dns-46 from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:31.530: INFO: Unable to 
read wheezy_udp@dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:31.532: INFO: Unable to read wheezy_tcp@dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:31.534: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:31.535: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:31.551: INFO: Unable to read jessie_udp@dns-test-service from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:31.554: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:31.557: INFO: Unable to read jessie_udp@dns-test-service.dns-46 from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:31.559: INFO: Unable to read jessie_tcp@dns-test-service.dns-46 from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:31.562: INFO: Unable to read jessie_udp@dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:31.568: INFO: Unable to read jessie_tcp@dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:31.570: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:31.572: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:31.587: INFO: Lookups using dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-46 wheezy_tcp@dns-test-service.dns-46 wheezy_udp@dns-test-service.dns-46.svc wheezy_tcp@dns-test-service.dns-46.svc wheezy_udp@_http._tcp.dns-test-service.dns-46.svc wheezy_tcp@_http._tcp.dns-test-service.dns-46.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-46 jessie_tcp@dns-test-service.dns-46 jessie_udp@dns-test-service.dns-46.svc 
jessie_tcp@dns-test-service.dns-46.svc jessie_udp@_http._tcp.dns-test-service.dns-46.svc jessie_tcp@_http._tcp.dns-test-service.dns-46.svc] Aug 24 23:54:36.537: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:36.540: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:36.543: INFO: Unable to read wheezy_udp@dns-test-service.dns-46 from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:36.546: INFO: Unable to read wheezy_tcp@dns-test-service.dns-46 from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:36.549: INFO: Unable to read wheezy_udp@dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:36.551: INFO: Unable to read wheezy_tcp@dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:36.554: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:36.557: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:36.578: INFO: Unable to read jessie_udp@dns-test-service from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:36.580: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:36.583: INFO: Unable to read jessie_udp@dns-test-service.dns-46 from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:36.585: INFO: Unable to read jessie_tcp@dns-test-service.dns-46 from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:36.588: INFO: Unable to read jessie_udp@dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:36.591: INFO: Unable to read jessie_tcp@dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods 
dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:36.593: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:36.595: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:36.611: INFO: Lookups using dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-46 wheezy_tcp@dns-test-service.dns-46 wheezy_udp@dns-test-service.dns-46.svc wheezy_tcp@dns-test-service.dns-46.svc wheezy_udp@_http._tcp.dns-test-service.dns-46.svc wheezy_tcp@_http._tcp.dns-test-service.dns-46.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-46 jessie_tcp@dns-test-service.dns-46 jessie_udp@dns-test-service.dns-46.svc jessie_tcp@dns-test-service.dns-46.svc jessie_udp@_http._tcp.dns-test-service.dns-46.svc jessie_tcp@_http._tcp.dns-test-service.dns-46.svc] Aug 24 23:54:41.496: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:41.499: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:41.502: INFO: Unable to read wheezy_udp@dns-test-service.dns-46 from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:41.505: INFO: Unable to read wheezy_tcp@dns-test-service.dns-46 from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:41.508: INFO: Unable to read wheezy_udp@dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:41.510: INFO: Unable to read wheezy_tcp@dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:41.513: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:41.516: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:41.578: INFO: Unable to read jessie_udp@dns-test-service from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:41.581: INFO: Unable to 
read jessie_tcp@dns-test-service from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:41.584: INFO: Unable to read jessie_udp@dns-test-service.dns-46 from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:41.587: INFO: Unable to read jessie_tcp@dns-test-service.dns-46 from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:41.590: INFO: Unable to read jessie_udp@dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:41.592: INFO: Unable to read jessie_tcp@dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:41.595: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:41.597: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:41.625: INFO: Lookups using dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-46 wheezy_tcp@dns-test-service.dns-46 wheezy_udp@dns-test-service.dns-46.svc wheezy_tcp@dns-test-service.dns-46.svc wheezy_udp@_http._tcp.dns-test-service.dns-46.svc wheezy_tcp@_http._tcp.dns-test-service.dns-46.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-46 jessie_tcp@dns-test-service.dns-46 jessie_udp@dns-test-service.dns-46.svc jessie_tcp@dns-test-service.dns-46.svc jessie_udp@_http._tcp.dns-test-service.dns-46.svc jessie_tcp@_http._tcp.dns-test-service.dns-46.svc] Aug 24 23:54:46.497: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:46.501: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:46.505: INFO: Unable to read wheezy_udp@dns-test-service.dns-46 from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:46.509: INFO: Unable to read wheezy_tcp@dns-test-service.dns-46 from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:46.512: INFO: Unable to read wheezy_udp@dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the 
server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:46.515: INFO: Unable to read wheezy_tcp@dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:46.518: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:46.520: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:46.537: INFO: Unable to read jessie_udp@dns-test-service from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:46.539: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:46.542: INFO: Unable to read jessie_udp@dns-test-service.dns-46 from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:46.545: INFO: Unable to read jessie_tcp@dns-test-service.dns-46 from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:46.547: INFO: Unable to read jessie_udp@dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:46.550: INFO: Unable to read jessie_tcp@dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:46.552: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:46.555: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-46.svc from pod dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c: the server could not find the requested resource (get pods dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c) Aug 24 23:54:46.573: INFO: Lookups using dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-46 wheezy_tcp@dns-test-service.dns-46 wheezy_udp@dns-test-service.dns-46.svc wheezy_tcp@dns-test-service.dns-46.svc wheezy_udp@_http._tcp.dns-test-service.dns-46.svc wheezy_tcp@_http._tcp.dns-test-service.dns-46.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-46 jessie_tcp@dns-test-service.dns-46 jessie_udp@dns-test-service.dns-46.svc jessie_tcp@dns-test-service.dns-46.svc jessie_udp@_http._tcp.dns-test-service.dns-46.svc 
jessie_tcp@_http._tcp.dns-test-service.dns-46.svc]
Aug 24 23:54:51.567: INFO: DNS probes using dns-46/dns-test-a08a1dff-90e5-438d-b1ad-864ee0f21c2c succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 24 23:54:52.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-46" for this suite.
• [SLOW TEST:40.061 seconds]
[sig-network] DNS
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":303,"completed":101,"skipped":1725,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 24 23:54:52.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77
[It] deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Aug 24 23:54:52.642: INFO: Creating deployment "webserver-deployment"
Aug 24 23:54:52.659: INFO: Waiting for observed generation 1
Aug 24 23:54:55.029: INFO: Waiting for all required pods to come up
Aug 24 23:54:55.569: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Aug 24 23:55:07.873: INFO: Waiting for deployment "webserver-deployment" to complete
Aug 24 23:55:07.876: INFO: Updating deployment "webserver-deployment" with a non-existent image
Aug 24 23:55:07.880: INFO: Updating deployment webserver-deployment
Aug 24 23:55:07.880: INFO: Waiting for observed generation 2
Aug 24 23:55:10.130: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Aug 24 23:55:10.132: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Aug 24 23:55:10.189: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Aug 24 23:55:10.369: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Aug 24 23:55:10.369: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Aug 24 23:55:10.372: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Aug 24 23:55:10.387: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Aug 24 23:55:10.387: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Aug 24 23:55:10.395: INFO: Updating deployment webserver-deployment
Aug 24 23:55:10.395: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Aug 24 23:55:10.928: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Aug 24 23:55:11.318: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
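Note on the scaling arithmetic above: going from 10 to 30 replicas with maxSurge=3 allows 33 replicas in total (the deployment.kubernetes.io/max-replicas:33 annotation in the dump below), so 20 replicas are added on top of the current 8+5, split in proportion to each ReplicaSet's size: the first ReplicaSet grows 8 -> 20 and the second 5 -> 13. The Go sketch below reproduces only the arithmetic visible in this run; it is a simplified model, not the deployment controller's actual implementation, which works from the desired-replicas/max-replicas annotations and distributes rounding leftovers more carefully.

package main

import (
	"fmt"
	"math"
)

// proportionalShares grows each ReplicaSet in proportion to its current size.
// Simplified model: a rounded share is computed per ReplicaSet, and any
// rounding leftover is pushed onto the first one so the total adds up exactly.
func proportionalShares(sizes []int, toAdd int) []int {
	total := 0
	for _, s := range sizes {
		total += s
	}
	out := make([]int, len(sizes))
	added := 0
	for i, s := range sizes {
		share := int(math.Round(float64(s) * float64(toAdd) / float64(total)))
		out[i] = s + share
		added += share
	}
	out[0] += toAdd - added // reconcile rounding so the shares sum to toAdd
	return out
}

func main() {
	// Old ReplicaSet at 8 replicas, new (broken-image) ReplicaSet at 5;
	// scaling the deployment 10 -> 30 with maxSurge=3 leaves 33-13 = 20 to add.
	fmt.Println(proportionalShares([]int{8, 5}, 20)) // [20 13], as in the log
}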
"webserver-deployment" to have desired number of replicas Aug 24 23:55:10.387: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Aug 24 23:55:10.387: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Aug 24 23:55:10.395: INFO: Updating deployment webserver-deployment Aug 24 23:55:10.395: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Aug 24 23:55:10.928: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Aug 24 23:55:11.318: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Aug 24 23:55:11.765: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-3116 /apis/apps/v1/namespaces/deployment-3116/deployments/webserver-deployment 56ec0a40-d511-43fe-894e-34acbf681420 3421585 3 2020-08-24 23:54:52 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-08-24 23:55:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-24 23:55:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002ff1538 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2020-08-24 23:55:09 +0000 UTC,LastTransitionTime:2020-08-24 23:54:52 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-24 23:55:10 +0000 UTC,LastTransitionTime:2020-08-24 23:55:10 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Aug 24 23:55:11.833: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-3116 /apis/apps/v1/namespaces/deployment-3116/replicasets/webserver-deployment-795d758f88 b7230260-d378-4006-baf0-9e87e5d735cf 3421646 3 2020-08-24 23:55:07 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 56ec0a40-d511-43fe-894e-34acbf681420 0xc0039b90a7 0xc0039b90a8}] [] [{kube-controller-manager Update apps/v1 2020-08-24 23:55:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"56ec0a40-d511-43fe-894e-34acbf681420\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0039b9128 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] 
[] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 24 23:55:11.833: INFO: All old ReplicaSets of Deployment "webserver-deployment": Aug 24 23:55:11.834: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7 deployment-3116 /apis/apps/v1/namespaces/deployment-3116/replicasets/webserver-deployment-dd94f59b7 239551d7-91b3-4ae5-83d9-a60ee6000083 3421614 3 2020-08-24 23:54:52 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 56ec0a40-d511-43fe-894e-34acbf681420 0xc0039b9187 0xc0039b9188}] [] [{kube-controller-manager Update apps/v1 2020-08-24 23:55:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"56ec0a40-d511-43fe-894e-34acbf681420\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0039b9208 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Aug 24 23:55:12.062: INFO: Pod "webserver-deployment-795d758f88-7xdk8" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-7xdk8 webserver-deployment-795d758f88- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-795d758f88-7xdk8 0b46df1a-f377-4483-8edf-bc39ccc047cd 3421631 0 2020-08-24 23:55:11 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 
ReplicaSet webserver-deployment-795d758f88 b7230260-d378-4006-baf0-9e87e5d735cf 0xc0039b9717 0xc0039b9718}] [] [{kube-controller-manager Update v1 2020-08-24 23:55:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7230260-d378-4006-baf0-9e87e5d735cf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainer
s:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:55:12.062: INFO: Pod "webserver-deployment-795d758f88-8kc6p" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-8kc6p webserver-deployment-795d758f88- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-795d758f88-8kc6p c929e0cf-4f08-4ba6-a5f4-793567f25475 3421550 0 2020-08-24 23:55:07 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 b7230260-d378-4006-baf0-9e87e5d735cf 0xc0039b9857 0xc0039b9858}] [] [{kube-controller-manager Update v1 2020-08-24 23:55:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7230260-d378-4006-baf0-9e87e5d735cf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-24 23:55:09 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-24 23:55:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-24 23:55:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:55:12.062: INFO: Pod "webserver-deployment-795d758f88-8tdqm" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-8tdqm webserver-deployment-795d758f88- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-795d758f88-8tdqm 9ed30c62-e90c-4632-a178-2a288de4d4b4 3421618 0 2020-08-24 23:55:11 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 b7230260-d378-4006-baf0-9e87e5d735cf 0xc0039b9a27 0xc0039b9a28}] [] [{kube-controller-manager Update v1 2020-08-24 23:55:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7230260-d378-4006-baf0-9e87e5d735cf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil
,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:55:12.063: INFO: Pod "webserver-deployment-795d758f88-9ppqd" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-9ppqd webserver-deployment-795d758f88- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-795d758f88-9ppqd f1d452cd-13ba-4bbd-a6ff-eb03f3d6c255 3421565 0 2020-08-24 23:55:08 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 b7230260-d378-4006-baf0-9e87e5d735cf 0xc0039b9b67 0xc0039b9b68}] [] [{kube-controller-manager Update v1 2020-08-24 23:55:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7230260-d378-4006-baf0-9e87e5d735cf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-24 23:55:09 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-24 23:55:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-24 23:55:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:55:12.063: INFO: Pod "webserver-deployment-795d758f88-bp22z" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-bp22z webserver-deployment-795d758f88- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-795d758f88-bp22z ef838d80-042d-4256-bf89-1bf567cff83c 3421635 0 2020-08-24 23:55:11 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 b7230260-d378-4006-baf0-9e87e5d735cf 0xc0039b9d17 0xc0039b9d18}] [] [{kube-controller-manager Update v1 2020-08-24 23:55:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7230260-d378-4006-baf0-9e87e5d735cf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil
,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:55:12.063: INFO: Pod "webserver-deployment-795d758f88-cbpfn" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-cbpfn webserver-deployment-795d758f88- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-795d758f88-cbpfn d16aabca-4895-4dc5-91af-9c4e288f92d0 3421533 0 2020-08-24 23:55:07 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 b7230260-d378-4006-baf0-9e87e5d735cf 0xc0039b9e57 0xc0039b9e58}] [] [{kube-controller-manager Update v1 2020-08-24 23:55:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7230260-d378-4006-baf0-9e87e5d735cf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-24 23:55:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-24 23:55:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-24 23:55:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:55:12.064: INFO: Pod "webserver-deployment-795d758f88-jj4x5" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-jj4x5 webserver-deployment-795d758f88- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-795d758f88-jj4x5 ca8cdc6d-b4fc-4c82-b492-935f31bf3971 3421536 0 2020-08-24 23:55:07 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 b7230260-d378-4006-baf0-9e87e5d735cf 0xc00370a027 0xc00370a028}] [] [{kube-controller-manager Update v1 2020-08-24 23:55:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7230260-d378-4006-baf0-9e87e5d735cf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-24 23:55:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-24 23:55:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-24 23:55:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:55:12.064: INFO: Pod "webserver-deployment-795d758f88-krclp" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-krclp webserver-deployment-795d758f88- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-795d758f88-krclp 68cc0539-1bcc-4f05-b56e-d49b9fa0a8d9 3421619 0 2020-08-24 23:55:11 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 b7230260-d378-4006-baf0-9e87e5d735cf 0xc00370a1e7 0xc00370a1e8}] [] [{kube-controller-manager Update v1 2020-08-24 23:55:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7230260-d378-4006-baf0-9e87e5d735cf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil
,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:55:12.064: INFO: Pod "webserver-deployment-795d758f88-msgdw" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-msgdw webserver-deployment-795d758f88- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-795d758f88-msgdw a594b9f4-ac35-4815-b6b6-b95cc087259a 3421607 0 2020-08-24 23:55:11 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 b7230260-d378-4006-baf0-9e87e5d735cf 0xc00370a327 0xc00370a328}] [] [{kube-controller-manager Update v1 2020-08-24 23:55:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7230260-d378-4006-baf0-9e87e5d735cf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-08-24 23:55:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:55:12.064: INFO: Pod "webserver-deployment-795d758f88-qgrw9" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-qgrw9 webserver-deployment-795d758f88- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-795d758f88-qgrw9 7444d700-050c-4340-8575-06faabb9ea6f 3421602 0 2020-08-24 23:55:11 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 b7230260-d378-4006-baf0-9e87e5d735cf 0xc00370a467 0xc00370a468}] [] [{kube-controller-manager Update v1 2020-08-24 23:55:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7230260-d378-4006-baf0-9e87e5d735cf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecr
ets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:55:12.065: INFO: Pod "webserver-deployment-795d758f88-spp8h" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-spp8h webserver-deployment-795d758f88- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-795d758f88-spp8h 8068ec9c-876a-495b-a7cb-fda4177aeb7b 3421567 0 2020-08-24 23:55:09 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 b7230260-d378-4006-baf0-9e87e5d735cf 0xc00370a5a7 0xc00370a5a8}] [] [{kube-controller-manager Update v1 2020-08-24 23:55:09 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7230260-d378-4006-baf0-9e87e5d735cf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-24 23:55:10 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-24 23:55:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-24 23:55:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:55:12.065: INFO: Pod "webserver-deployment-795d758f88-t6r9g" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-t6r9g webserver-deployment-795d758f88- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-795d758f88-t6r9g 831aad95-833f-42cf-991f-af12ed518dca 3421659 0 2020-08-24 23:55:10 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 b7230260-d378-4006-baf0-9e87e5d735cf 0xc00370a757 0xc00370a758}] [] [{kube-controller-manager Update v1 2020-08-24 23:55:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7230260-d378-4006-baf0-9e87e5d735cf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-24 23:55:11 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-24 23:55:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-24 23:55:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:55:12.065: INFO: Pod "webserver-deployment-795d758f88-wlhbl" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-wlhbl webserver-deployment-795d758f88- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-795d758f88-wlhbl 81f9ae04-97ac-4761-88cb-d8e59eb70519 3421617 0 2020-08-24 23:55:11 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 b7230260-d378-4006-baf0-9e87e5d735cf 0xc00370a927 0xc00370a928}] [] [{kube-controller-manager Update v1 2020-08-24 23:55:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7230260-d378-4006-baf0-9e87e5d735cf\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil
,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:55:12.065: INFO: Pod "webserver-deployment-dd94f59b7-4n5ff" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-4n5ff webserver-deployment-dd94f59b7- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-dd94f59b7-4n5ff aa90af87-7e76-43f7-a451-4a27fe3a4a7d 3421629 0 2020-08-24 23:55:11 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 239551d7-91b3-4ae5-83d9-a60ee6000083 0xc00370aa67 0xc00370aa68}] [] [{kube-controller-manager Update v1 2020-08-24 23:55:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"239551d7-91b3-4ae5-83d9-a60ee6000083\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime
:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:55:12.065: INFO: Pod "webserver-deployment-dd94f59b7-52p98" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-52p98 webserver-deployment-dd94f59b7- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-dd94f59b7-52p98 0faf3949-1edf-4201-bfa1-90b32bcfa9fd 3421476 0 2020-08-24 23:54:52 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 239551d7-91b3-4ae5-83d9-a60ee6000083 0xc00370b1b7 0xc00370b1b8}] [] [{kube-controller-manager Update v1 2020-08-24 23:54:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"239551d7-91b3-4ae5-83d9-a60ee6000083\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-24 23:55:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.159\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,T
TY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:54:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:54:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.159,StartTime:2020-08-24 23:54:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-24 23:55:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c38b264aed09dad0e951062fbc4c55ab6e6f50b7b3f1c7cd457fc455dcf60770,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.159,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:55:12.066: INFO: Pod "webserver-deployment-dd94f59b7-66pbw" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-66pbw webserver-deployment-dd94f59b7- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-dd94f59b7-66pbw 10ab4e7c-6af2-4005-88df-21878f05eeb1 3421453 0 2020-08-24 23:54:52 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 239551d7-91b3-4ae5-83d9-a60ee6000083 0xc00370b437 0xc00370b438}] [] [{kube-controller-manager Update v1 2020-08-24 23:54:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"239551d7-91b3-4ae5-83d9-a60ee6000083\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-24 23:55:03 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.156\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{
Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:54:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:54:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.156,StartTime:2020-08-24 23:54:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-24 23:55:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f057a2c9b39d5595ee61dd7e7d019a58167efe2d00a64edf5873476f5870eb99,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.156,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:55:12.066: INFO: Pod "webserver-deployment-dd94f59b7-8rgfx" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-8rgfx webserver-deployment-dd94f59b7- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-dd94f59b7-8rgfx f9feb31d-0de5-4af8-9012-c090333346de 3421647 0 2020-08-24 23:55:10 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 239551d7-91b3-4ae5-83d9-a60ee6000083 0xc00370b777 0xc00370b778}] [] [{kube-controller-manager Update v1 2020-08-24 23:55:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"239551d7-91b3-4ae5-83d9-a60ee6000083\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-24 23:55:11 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:11 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-24 23:55:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:55:12.066: INFO: Pod "webserver-deployment-dd94f59b7-9dhr5" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-9dhr5 webserver-deployment-dd94f59b7- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-dd94f59b7-9dhr5 feb9858b-dd6b-4e7e-abb6-549ddd9a4b47 3421462 0 2020-08-24 23:54:52 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 239551d7-91b3-4ae5-83d9-a60ee6000083 0xc00370b9b7 0xc00370b9b8}] [] [{kube-controller-manager Update v1 2020-08-24 23:54:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"239551d7-91b3-4ae5-83d9-a60ee6000083\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-24 23:55:04 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.155\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:54:52 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:54:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.155,StartTime:2020-08-24 23:54:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-24 23:55:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://696e5e2a6e06474e19e88a7909f52b15fa3e3d3f522944813e0970b7a4fe35e4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.155,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:55:12.066: INFO: Pod "webserver-deployment-dd94f59b7-g6pkh" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-g6pkh webserver-deployment-dd94f59b7- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-dd94f59b7-g6pkh 582107cd-bfce-4a9c-98c9-bf073dffbfec 3421604 0 2020-08-24 23:55:11 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 239551d7-91b3-4ae5-83d9-a60ee6000083 0xc00370bb67 0xc00370bb68}] [] [{kube-controller-manager Update v1 2020-08-24 23:55:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"239551d7-91b3-4ae5-83d9-a60ee6000083\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:
0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:55:12.067: INFO: Pod "webserver-deployment-dd94f59b7-jpqmk" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-jpqmk webserver-deployment-dd94f59b7- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-dd94f59b7-jpqmk 44a016dc-7260-42f0-a221-5af0bd531c7a 3421654 0 2020-08-24 23:55:10 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 239551d7-91b3-4ae5-83d9-a60ee6000083 0xc00370bc97 0xc00370bc98}] [] [{kube-controller-manager Update v1 2020-08-24 23:55:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"239551d7-91b3-4ae5-83d9-a60ee6000083\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-24 23:55:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{}
,TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-24 23:55:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:55:12.067: INFO: Pod "webserver-deployment-dd94f59b7-kqtpv" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-kqtpv webserver-deployment-dd94f59b7- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-dd94f59b7-kqtpv 1769fc38-10fa-4c30-a902-1c94f68e194d 3421620 0 2020-08-24 23:55:11 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 239551d7-91b3-4ae5-83d9-a60ee6000083 0xc00370be37 0xc00370be38}] [] [{kube-controller-manager Update v1 2020-08-24 23:55:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"239551d7-91b3-4ae5-83d9-a60ee6000083\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:
0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:55:12.067: INFO: Pod "webserver-deployment-dd94f59b7-llgc4" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-llgc4 webserver-deployment-dd94f59b7- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-dd94f59b7-llgc4 0f8a11b7-a3e9-4183-ad5a-ee35808404be 3421484 0 2020-08-24 23:54:52 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 239551d7-91b3-4ae5-83d9-a60ee6000083 0xc00370bf67 0xc00370bf68}] [] [{kube-controller-manager Update v1 2020-08-24 23:54:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"239551d7-91b3-4ae5-83d9-a60ee6000083\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-24 23:55:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.57\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY
:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:54:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:54:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.2.57,StartTime:2020-08-24 23:54:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-24 23:55:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5baf864f69a8e5cc635ea221fff89e90fc68511f1d91eb9d2035c5190100273c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.57,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:55:12.067: INFO: Pod "webserver-deployment-dd94f59b7-lnt98" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-lnt98 webserver-deployment-dd94f59b7- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-dd94f59b7-lnt98 56b20da0-f8b6-4c83-a29d-ae65ee8a76d5 3421488 0 2020-08-24 23:54:52 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 239551d7-91b3-4ae5-83d9-a60ee6000083 0xc002e1a117 0xc002e1a118}] [] [{kube-controller-manager Update v1 2020-08-24 23:54:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"239551d7-91b3-4ae5-83d9-a60ee6000083\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-24 23:55:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.58\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Ke
y:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:54:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:54:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.2.58,StartTime:2020-08-24 23:54:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-24 23:55:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f6a24c1299358653e951bf4c59786d2da0a18a3de85b95ec7c47266744fcbebd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.58,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:55:12.067: INFO: Pod "webserver-deployment-dd94f59b7-pb622" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-pb622 webserver-deployment-dd94f59b7- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-dd94f59b7-pb622 82d59174-9802-4d7f-a3f0-a3e5ce169bcb 3421481 0 2020-08-24 23:54:52 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 239551d7-91b3-4ae5-83d9-a60ee6000083 0xc002e1a2c7 0xc002e1a2c8}] [] [{kube-controller-manager Update v1 2020-08-24 23:54:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"239551d7-91b3-4ae5-83d9-a60ee6000083\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-24 23:55:05 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.158\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:54:54 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:54:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.158,StartTime:2020-08-24 23:54:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-24 23:55:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e2752160448ec76e5e4d928388d56480c076545504c741ff884f90082e0545cc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.158,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:55:12.068: INFO: Pod "webserver-deployment-dd94f59b7-pzkmg" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-pzkmg webserver-deployment-dd94f59b7- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-dd94f59b7-pzkmg 7bc47c0f-3eee-47b2-b0e4-37c7162561ae 3421605 0 2020-08-24 23:55:11 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 239551d7-91b3-4ae5-83d9-a60ee6000083 0xc002e1a487 0xc002e1a488}] [] [{kube-controller-manager Update v1 2020-08-24 23:55:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"239551d7-91b3-4ae5-83d9-a60ee6000083\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime
:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:55:12.068: INFO: Pod "webserver-deployment-dd94f59b7-qfxst" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-qfxst webserver-deployment-dd94f59b7- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-dd94f59b7-qfxst 11d8655b-bacb-4c1e-a2fc-0e6de999d4bb 3421633 0 2020-08-24 23:55:10 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 239551d7-91b3-4ae5-83d9-a60ee6000083 0xc002e1a5b7 0xc002e1a5b8}] [] [{kube-controller-manager Update v1 2020-08-24 23:55:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"239551d7-91b3-4ae5-83d9-a60ee6000083\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-24 23:55:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{
},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:,StartTime:2020-08-24 23:55:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:55:12.068: INFO: Pod "webserver-deployment-dd94f59b7-qpvs2" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-qpvs2 webserver-deployment-dd94f59b7- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-dd94f59b7-qpvs2 cf575518-8078-4122-af74-83044fb0971e 3421611 0 2020-08-24 23:55:11 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 239551d7-91b3-4ae5-83d9-a60ee6000083 0xc002e1a747 0xc002e1a748}] [] [{kube-controller-manager Update v1 2020-08-24 23:55:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"239551d7-91b3-4ae5-83d9-a60ee6000083\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime
:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:55:12.068: INFO: Pod "webserver-deployment-dd94f59b7-qxs7h" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-qxs7h webserver-deployment-dd94f59b7- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-dd94f59b7-qxs7h f6de5f1c-863a-45b1-a390-ef6cd0530c59 3421457 0 2020-08-24 23:54:52 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 239551d7-91b3-4ae5-83d9-a60ee6000083 0xc002e1a877 0xc002e1a878}] [] [{kube-controller-manager Update v1 2020-08-24 23:54:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"239551d7-91b3-4ae5-83d9-a60ee6000083\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-24 23:55:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.157\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,T
TY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:54:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:54:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.157,StartTime:2020-08-24 23:54:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-24 23:55:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://42d02c201841aa21ae3fc708053324e0220bc7c495ddd5a7c76cbeae6933d3d3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.157,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:55:12.068: INFO: Pod "webserver-deployment-dd94f59b7-rrmcb" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-rrmcb webserver-deployment-dd94f59b7- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-dd94f59b7-rrmcb a852083c-1e0b-4ab3-989e-544253f6cecb 3421630 0 2020-08-24 23:55:11 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 239551d7-91b3-4ae5-83d9-a60ee6000083 0xc002e1aa27 0xc002e1aa28}] [] [{kube-controller-manager Update v1 2020-08-24 23:55:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"239551d7-91b3-4ae5-83d9-a60ee6000083\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:
0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:55:12.069: INFO: Pod "webserver-deployment-dd94f59b7-sdgvq" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-sdgvq webserver-deployment-dd94f59b7- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-dd94f59b7-sdgvq 98173ef5-b555-47ac-a49b-059adb778f90 3421615 0 2020-08-24 23:55:11 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 239551d7-91b3-4ae5-83d9-a60ee6000083 0xc002e1ab57 0xc002e1ab58}] [] [{kube-controller-manager Update v1 2020-08-24 23:55:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"239551d7-91b3-4ae5-83d9-a60ee6000083\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:
nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:55:12.069: INFO: Pod "webserver-deployment-dd94f59b7-tctcv" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-tctcv webserver-deployment-dd94f59b7- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-dd94f59b7-tctcv ae181bc2-276d-4f41-9a6d-78b58178becc 3421628 0 2020-08-24 23:55:11 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 239551d7-91b3-4ae5-83d9-a60ee6000083 0xc002e1ac87 0xc002e1ac88}] [] [{kube-controller-manager Update v1 2020-08-24 23:55:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"239551d7-91b3-4ae5-83d9-a60ee6000083\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:
0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:55:12.069: INFO: Pod "webserver-deployment-dd94f59b7-xd5fw" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-xd5fw webserver-deployment-dd94f59b7- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-dd94f59b7-xd5fw 322f316e-c528-404f-9082-ae9373bf1a76 3421491 0 2020-08-24 23:54:52 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 239551d7-91b3-4ae5-83d9-a60ee6000083 0xc002e1adb7 0xc002e1adb8}] [] [{kube-controller-manager Update v1 2020-08-24 23:54:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"239551d7-91b3-4ae5-83d9-a60ee6000083\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-24 23:55:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.59\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY
:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:54:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:54:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.2.59,StartTime:2020-08-24 23:54:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-24 23:55:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f74f74c8f5f7fc04a0c334b4e65d7fc7f97fc380869329f607e14d4c412966ca,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.59,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 24 23:55:12.069: INFO: Pod "webserver-deployment-dd94f59b7-xsmzh" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-xsmzh webserver-deployment-dd94f59b7- deployment-3116 /api/v1/namespaces/deployment-3116/pods/webserver-deployment-dd94f59b7-xsmzh 9258ccde-4ad4-4de7-a71d-eec0c5bca060 3421612 0 2020-08-24 23:55:11 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 239551d7-91b3-4ae5-83d9-a60ee6000083 0xc002e1af67 0xc002e1af68}] [] [{kube-controller-manager Update v1 2020-08-24 23:55:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"239551d7-91b3-4ae5-83d9-a60ee6000083\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tm9kp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tm9kp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tm9kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:
0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:55:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:55:12.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3116" for this suite. • [SLOW TEST:19.671 seconds] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":303,"completed":102,"skipped":1743,"failed":0} SSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:55:12.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-d6152a33-abe2-4555-8e60-f20d070695c4 [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:55:12.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3934" for this suite. 
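The empty-key ConfigMap check above needs no special harness to reproduce: the API server itself enforces that every key in .data is a valid config key, so a bare client-go program observes the same rejection. Below is a minimal sketch, assuming a reachable cluster; the kubeconfig path, the "default" namespace, and the object name are illustrative placeholders, not values from this run.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; point it at any kubeconfig for a test cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"},
		// An empty string is not a valid config key, so the create must fail.
		Data: map[string]string{"": "value"},
	}
	_, err = client.CoreV1().ConfigMaps("default").Create(context.TODO(), cm, metav1.CreateOptions{})
	if err == nil {
		panic("expected the API server to reject the empty key")
	}
	// Server-side validation surfaces as a 422 Invalid error.
	fmt.Printf("rejected as expected (IsInvalid=%v): %v\n", apierrors.IsInvalid(err), err)
}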
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":303,"completed":103,"skipped":1748,"failed":0} SSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:55:12.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-8 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-8 STEP: creating replication controller externalsvc in namespace services-8 I0824 23:55:15.220611 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-8, replica count: 2 I0824 23:55:18.271029 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0824 23:55:21.271160 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0824 23:55:24.271408 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0824 23:55:27.271651 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0824 23:55:30.271928 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0824 23:55:33.272127 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0824 23:55:36.272342 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Aug 24 23:55:37.045: INFO: Creating new exec pod Aug 24 23:55:48.249: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-8 execpod7kpfl -- /bin/sh -x -c nslookup clusterip-service.services-8.svc.cluster.local' Aug 24 23:55:50.187: INFO: stderr: "I0824 23:55:50.086761 684 log.go:181] (0xc0001f60b0) (0xc000902500) Create stream\nI0824 23:55:50.086819 684 log.go:181] (0xc0001f60b0) (0xc000902500) Stream added, broadcasting: 1\nI0824 23:55:50.088531 684 log.go:181] 
(0xc0001f60b0) Reply frame received for 1\nI0824 23:55:50.088568 684 log.go:181] (0xc0001f60b0) (0xc0009039a0) Create stream\nI0824 23:55:50.088578 684 log.go:181] (0xc0001f60b0) (0xc0009039a0) Stream added, broadcasting: 3\nI0824 23:55:50.089399 684 log.go:181] (0xc0001f60b0) Reply frame received for 3\nI0824 23:55:50.089424 684 log.go:181] (0xc0001f60b0) (0xc000776500) Create stream\nI0824 23:55:50.089433 684 log.go:181] (0xc0001f60b0) (0xc000776500) Stream added, broadcasting: 5\nI0824 23:55:50.090548 684 log.go:181] (0xc0001f60b0) Reply frame received for 5\nI0824 23:55:50.168527 684 log.go:181] (0xc0001f60b0) Data frame received for 5\nI0824 23:55:50.168565 684 log.go:181] (0xc000776500) (5) Data frame handling\nI0824 23:55:50.168591 684 log.go:181] (0xc000776500) (5) Data frame sent\n+ nslookup clusterip-service.services-8.svc.cluster.local\nI0824 23:55:50.174989 684 log.go:181] (0xc0001f60b0) Data frame received for 3\nI0824 23:55:50.175023 684 log.go:181] (0xc0009039a0) (3) Data frame handling\nI0824 23:55:50.175045 684 log.go:181] (0xc0009039a0) (3) Data frame sent\nI0824 23:55:50.175683 684 log.go:181] (0xc0001f60b0) Data frame received for 3\nI0824 23:55:50.175707 684 log.go:181] (0xc0009039a0) (3) Data frame handling\nI0824 23:55:50.175729 684 log.go:181] (0xc0009039a0) (3) Data frame sent\nI0824 23:55:50.177822 684 log.go:181] (0xc0001f60b0) Data frame received for 5\nI0824 23:55:50.177846 684 log.go:181] (0xc000776500) (5) Data frame handling\nI0824 23:55:50.178009 684 log.go:181] (0xc0001f60b0) Data frame received for 3\nI0824 23:55:50.178026 684 log.go:181] (0xc0009039a0) (3) Data frame handling\nI0824 23:55:50.179720 684 log.go:181] (0xc0001f60b0) Data frame received for 1\nI0824 23:55:50.179741 684 log.go:181] (0xc000902500) (1) Data frame handling\nI0824 23:55:50.179757 684 log.go:181] (0xc000902500) (1) Data frame sent\nI0824 23:55:50.179772 684 log.go:181] (0xc0001f60b0) (0xc000902500) Stream removed, broadcasting: 1\nI0824 23:55:50.179788 684 log.go:181] (0xc0001f60b0) Go away received\nI0824 23:55:50.180081 684 log.go:181] (0xc0001f60b0) (0xc000902500) Stream removed, broadcasting: 1\nI0824 23:55:50.180093 684 log.go:181] (0xc0001f60b0) (0xc0009039a0) Stream removed, broadcasting: 3\nI0824 23:55:50.180099 684 log.go:181] (0xc0001f60b0) (0xc000776500) Stream removed, broadcasting: 5\n" Aug 24 23:55:50.188: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-8.svc.cluster.local\tcanonical name = externalsvc.services-8.svc.cluster.local.\nName:\texternalsvc.services-8.svc.cluster.local\nAddress: 10.103.4.209\n\n" STEP: deleting ReplicationController externalsvc in namespace services-8, will wait for the garbage collector to delete the pods Aug 24 23:55:51.462: INFO: Deleting ReplicationController externalsvc took: 793.233341ms Aug 24 23:55:52.562: INFO: Terminating ReplicationController externalsvc pods took: 1.100238374s Aug 24 23:56:02.773: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:56:03.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8" for this suite. 
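The step above labelled "changing the ClusterIP service to type=ExternalName" is an ordinary Service update: the selector-backed service is replaced by a DNS CNAME to the target name, which is why the nslookup output shows clusterip-service resolving to externalsvc's canonical name. A sketch of that update follows, assuming a clientset built as in the earlier sketch; the function name and target argument are illustrative, not the suite's own helper.

package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// toExternalName switches an existing ClusterIP service to type=ExternalName,
// so in-cluster DNS serves a CNAME to target instead of a virtual IP.
func toExternalName(ctx context.Context, client kubernetes.Interface, ns, name, target string) error {
	svc, err := client.CoreV1().Services(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = target // e.g. "externalsvc.services-8.svc.cluster.local"
	svc.Spec.ClusterIP = ""        // an ExternalName service may not keep a cluster IP
	_, err = client.CoreV1().Services(ns).Update(ctx, svc, metav1.UpdateOptions{})
	return err
}

Clearing .spec.clusterIP is the step that is easy to miss; validation rejects an ExternalName service that still carries one.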
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:52.096 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":303,"completed":104,"skipped":1754,"failed":0} SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:56:04.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Aug 24 23:56:06.503: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:56:17.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7801" for this suite. 
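The pod shape behind "should invoke init containers on a RestartAlways pod" is worth seeing explicitly: init containers run one at a time, each to completion, before any regular container starts, and restartPolicy: Always governs only the main containers afterwards. A sketch of such a pod, with hypothetical names and a busybox image; the actual test builds its pod programmatically with different images.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithInitContainers builds a pod whose two init containers must each
// exit 0, in order, before the long-running main container is started.
func podWithInitContainers(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-demo", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox:1.28", Command: []string{"true"}},
				{Name: "init2", Image: "busybox:1.28", Command: []string{"true"}},
			},
			Containers: []corev1.Container{
				{Name: "main", Image: "busybox:1.28", Command: []string{"sleep", "3600"}},
			},
		},
	}
}

Submitting this pod and watching its status would show the Initialized condition flip to True only after both init containers have exited successfully.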
• [SLOW TEST:12.542 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":303,"completed":105,"skipped":1756,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:56:17.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name secret-emptykey-test-9e6fca27-903f-4e80-b687-d0705897547c [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:56:17.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9197" for this suite. 
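Secret data keys go through the same name validation as ConfigMap keys, so the empty-key rejection above can be asserted with a single call. A minimal sketch, assuming a clientset as in the earlier sketches; the function and object names are hypothetical.

package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// expectEmptyKeyRejected returns true when the API server refuses a Secret
// whose data map contains an empty key, which should surface as a
// validation (Invalid) error with nothing persisted.
func expectEmptyKeyRejected(ctx context.Context, client kubernetes.Interface, ns string) bool {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-test"},
		Data:       map[string][]byte{"": []byte("value")}, // invalid: empty key
	}
	_, err := client.CoreV1().Secrets(ns).Create(ctx, secret, metav1.CreateOptions{})
	return err != nil && apierrors.IsInvalid(err)
}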
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":303,"completed":106,"skipped":1771,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:56:17.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Aug 24 23:56:21.613: INFO: &Pod{ObjectMeta:{send-events-18b5d73f-78e8-42e5-9f24-b5cbd4db17bd events-4615 /api/v1/namespaces/events-4615/pods/send-events-18b5d73f-78e8-42e5-9f24-b5cbd4db17bd 80f5fb1e-0e0d-43ec-89be-a36a6ba91fd8 3422270 0 2020-08-24 23:56:17 +0000 UTC map[name:foo time:573229730] map[] [] [] [{e2e.test Update v1 2020-08-24 23:56:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-24 23:56:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.174\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gzcwp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gzcwp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gzcwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:56:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:56:21 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:56:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:56:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.174,StartTime:2020-08-24 23:56:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-24 23:56:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://fb8b7c43717824d96af2f2a26c073be9a0b8af8c200d108aa800789ccdebad5e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.174,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Aug 24 23:56:23.618: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Aug 24 23:56:25.622: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:56:25.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-4615" for this suite. • [SLOW TEST:8.181 seconds] [k8s.io] [sig-node] Events /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":303,"completed":107,"skipped":1808,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:56:25.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-8931 STEP: creating a selector STEP: Creating the 
service pods in kubernetes Aug 24 23:56:25.747: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Aug 24 23:56:25.798: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 24 23:56:28.079: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 24 23:56:29.810: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 24 23:56:31.802: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 24 23:56:33.802: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 24 23:56:35.802: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 24 23:56:37.874: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 24 23:56:39.803: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 24 23:56:41.803: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 24 23:56:43.802: INFO: The status of Pod netserver-0 is Running (Ready = true) Aug 24 23:56:43.807: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 24 23:56:45.810: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 24 23:56:47.811: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Aug 24 23:56:51.844: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.176:8080/dial?request=hostname&protocol=http&host=10.244.2.77&port=8080&tries=1'] Namespace:pod-network-test-8931 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 24 23:56:51.845: INFO: >>> kubeConfig: /root/.kube/config I0824 23:56:51.879880 7 log.go:181] (0xc000851810) (0xc0036f0640) Create stream I0824 23:56:51.879916 7 log.go:181] (0xc000851810) (0xc0036f0640) Stream added, broadcasting: 1 I0824 23:56:51.882079 7 log.go:181] (0xc000851810) Reply frame received for 1 I0824 23:56:51.882122 7 log.go:181] (0xc000851810) (0xc00053ad20) Create stream I0824 23:56:51.882137 7 log.go:181] (0xc000851810) (0xc00053ad20) Stream added, broadcasting: 3 I0824 23:56:51.883152 7 log.go:181] (0xc000851810) Reply frame received for 3 I0824 23:56:51.883180 7 log.go:181] (0xc000851810) (0xc002f06500) Create stream I0824 23:56:51.883189 7 log.go:181] (0xc000851810) (0xc002f06500) Stream added, broadcasting: 5 I0824 23:56:51.884286 7 log.go:181] (0xc000851810) Reply frame received for 5 I0824 23:56:51.971362 7 log.go:181] (0xc000851810) Data frame received for 3 I0824 23:56:51.971386 7 log.go:181] (0xc00053ad20) (3) Data frame handling I0824 23:56:51.971416 7 log.go:181] (0xc00053ad20) (3) Data frame sent I0824 23:56:51.971767 7 log.go:181] (0xc000851810) Data frame received for 3 I0824 23:56:51.971838 7 log.go:181] (0xc00053ad20) (3) Data frame handling I0824 23:56:51.971981 7 log.go:181] (0xc000851810) Data frame received for 5 I0824 23:56:51.972005 7 log.go:181] (0xc002f06500) (5) Data frame handling I0824 23:56:51.973774 7 log.go:181] (0xc000851810) Data frame received for 1 I0824 23:56:51.973793 7 log.go:181] (0xc0036f0640) (1) Data frame handling I0824 23:56:51.973808 7 log.go:181] (0xc0036f0640) (1) Data frame sent I0824 23:56:51.973821 7 log.go:181] (0xc000851810) (0xc0036f0640) Stream removed, broadcasting: 1 I0824 23:56:51.973836 7 log.go:181] (0xc000851810) Go away received I0824 23:56:51.973919 7 log.go:181] (0xc000851810) (0xc0036f0640) Stream removed, broadcasting: 1 I0824 23:56:51.973936 7 log.go:181] (0xc000851810) (0xc00053ad20) Stream removed, 
broadcasting: 3 I0824 23:56:51.973942 7 log.go:181] (0xc000851810) (0xc002f06500) Stream removed, broadcasting: 5 Aug 24 23:56:51.973: INFO: Waiting for responses: map[] Aug 24 23:56:51.977: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.176:8080/dial?request=hostname&protocol=http&host=10.244.1.175&port=8080&tries=1'] Namespace:pod-network-test-8931 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 24 23:56:51.977: INFO: >>> kubeConfig: /root/.kube/config I0824 23:56:52.006849 7 log.go:181] (0xc000851b80) (0xc0036f0820) Create stream I0824 23:56:52.006883 7 log.go:181] (0xc000851b80) (0xc0036f0820) Stream added, broadcasting: 1 I0824 23:56:52.008635 7 log.go:181] (0xc000851b80) Reply frame received for 1 I0824 23:56:52.008665 7 log.go:181] (0xc000851b80) (0xc00053ae60) Create stream I0824 23:56:52.008675 7 log.go:181] (0xc000851b80) (0xc00053ae60) Stream added, broadcasting: 3 I0824 23:56:52.009789 7 log.go:181] (0xc000851b80) Reply frame received for 3 I0824 23:56:52.009830 7 log.go:181] (0xc000851b80) (0xc00053afa0) Create stream I0824 23:56:52.009840 7 log.go:181] (0xc000851b80) (0xc00053afa0) Stream added, broadcasting: 5 I0824 23:56:52.010605 7 log.go:181] (0xc000851b80) Reply frame received for 5 I0824 23:56:52.085190 7 log.go:181] (0xc000851b80) Data frame received for 3 I0824 23:56:52.085214 7 log.go:181] (0xc00053ae60) (3) Data frame handling I0824 23:56:52.085226 7 log.go:181] (0xc00053ae60) (3) Data frame sent I0824 23:56:52.085794 7 log.go:181] (0xc000851b80) Data frame received for 5 I0824 23:56:52.085818 7 log.go:181] (0xc00053afa0) (5) Data frame handling I0824 23:56:52.085837 7 log.go:181] (0xc000851b80) Data frame received for 3 I0824 23:56:52.085846 7 log.go:181] (0xc00053ae60) (3) Data frame handling I0824 23:56:52.087316 7 log.go:181] (0xc000851b80) Data frame received for 1 I0824 23:56:52.087338 7 log.go:181] (0xc0036f0820) (1) Data frame handling I0824 23:56:52.087352 7 log.go:181] (0xc0036f0820) (1) Data frame sent I0824 23:56:52.087378 7 log.go:181] (0xc000851b80) (0xc0036f0820) Stream removed, broadcasting: 1 I0824 23:56:52.087451 7 log.go:181] (0xc000851b80) Go away received I0824 23:56:52.087504 7 log.go:181] (0xc000851b80) (0xc0036f0820) Stream removed, broadcasting: 1 I0824 23:56:52.087524 7 log.go:181] (0xc000851b80) (0xc00053ae60) Stream removed, broadcasting: 3 I0824 23:56:52.087544 7 log.go:181] (0xc000851b80) (0xc00053afa0) Stream removed, broadcasting: 5 Aug 24 23:56:52.087: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:56:52.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8931" for this suite. 
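------------------------------
Reference sketch: the intra-pod check above boils down to a single HTTP request. The suite execs curl inside the test-container pod against agnhost's /dial endpoint, which relays an HTTP "hostname" request to a netserver pod and reports which hosts answered. A minimal Go equivalent follows, assuming it runs from somewhere with pod-network reachability; the pod IPs are the ones from this run and purely illustrative:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Illustrative pod IPs (see the ExecWithOptions lines above); substitute
	// real ones, e.g. from `kubectl get pods -o wide`.
	const testPodIP, netserverIP = "10.244.1.176", "10.244.2.77"
	url := fmt.Sprintf(
		"http://%s:8080/dial?request=hostname&protocol=http&host=%s&port=8080&tries=1",
		testPodIP, netserverIP)
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// agnhost replies with JSON such as {"responses":["netserver-0"]}; an empty
	// "responses" list means the dial failed, which is what the suite retries on.
	fmt.Println(string(body))
}
------------------------------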
• [SLOW TEST:26.421 seconds] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":303,"completed":108,"skipped":1835,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:56:52.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command Aug 24 23:56:52.220: INFO: Waiting up to 5m0s for pod "client-containers-ec7282df-53af-42cc-a197-f52c0076be64" in namespace "containers-3780" to be "Succeeded or Failed" Aug 24 23:56:52.254: INFO: Pod "client-containers-ec7282df-53af-42cc-a197-f52c0076be64": Phase="Pending", Reason="", readiness=false. Elapsed: 33.969181ms Aug 24 23:56:54.302: INFO: Pod "client-containers-ec7282df-53af-42cc-a197-f52c0076be64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081678799s Aug 24 23:56:56.400: INFO: Pod "client-containers-ec7282df-53af-42cc-a197-f52c0076be64": Phase="Running", Reason="", readiness=true. Elapsed: 4.180040225s Aug 24 23:56:58.754: INFO: Pod "client-containers-ec7282df-53af-42cc-a197-f52c0076be64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.533619786s STEP: Saw pod success Aug 24 23:56:58.754: INFO: Pod "client-containers-ec7282df-53af-42cc-a197-f52c0076be64" satisfied condition "Succeeded or Failed" Aug 24 23:56:58.840: INFO: Trying to get logs from node latest-worker pod client-containers-ec7282df-53af-42cc-a197-f52c0076be64 container test-container: STEP: delete the pod Aug 24 23:56:59.535: INFO: Waiting for pod client-containers-ec7282df-53af-42cc-a197-f52c0076be64 to disappear Aug 24 23:56:59.807: INFO: Pod client-containers-ec7282df-53af-42cc-a197-f52c0076be64 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:56:59.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3780" for this suite. 
• [SLOW TEST:7.884 seconds] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":303,"completed":109,"skipped":1843,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:56:59.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Aug 24 23:57:06.619: INFO: Successfully updated pod "adopt-release-7cw9k" STEP: Checking that the Job readopts the Pod Aug 24 23:57:06.619: INFO: Waiting up to 15m0s for pod "adopt-release-7cw9k" in namespace "job-5809" to be "adopted" Aug 24 23:57:06.682: INFO: Pod "adopt-release-7cw9k": Phase="Running", Reason="", readiness=true. Elapsed: 62.738119ms Aug 24 23:57:08.685: INFO: Pod "adopt-release-7cw9k": Phase="Running", Reason="", readiness=true. Elapsed: 2.06599073s Aug 24 23:57:08.685: INFO: Pod "adopt-release-7cw9k" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Aug 24 23:57:09.194: INFO: Successfully updated pod "adopt-release-7cw9k" STEP: Checking that the Job releases the Pod Aug 24 23:57:09.194: INFO: Waiting up to 15m0s for pod "adopt-release-7cw9k" in namespace "job-5809" to be "released" Aug 24 23:57:09.237: INFO: Pod "adopt-release-7cw9k": Phase="Running", Reason="", readiness=true. Elapsed: 43.132047ms Aug 24 23:57:09.237: INFO: Pod "adopt-release-7cw9k" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:57:09.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5809" for this suite. 
• [SLOW TEST:9.392 seconds] [sig-apps] Job /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":303,"completed":110,"skipped":1871,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:57:09.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-jszg STEP: Creating a pod to test atomic-volume-subpath Aug 24 23:57:09.635: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-jszg" in namespace "subpath-5867" to be "Succeeded or Failed" Aug 24 23:57:09.673: INFO: Pod "pod-subpath-test-projected-jszg": Phase="Pending", Reason="", readiness=false. Elapsed: 38.734956ms Aug 24 23:57:11.678: INFO: Pod "pod-subpath-test-projected-jszg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043025794s Aug 24 23:57:13.682: INFO: Pod "pod-subpath-test-projected-jszg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047269176s Aug 24 23:57:15.686: INFO: Pod "pod-subpath-test-projected-jszg": Phase="Running", Reason="", readiness=true. Elapsed: 6.051117408s Aug 24 23:57:17.690: INFO: Pod "pod-subpath-test-projected-jszg": Phase="Running", Reason="", readiness=true. Elapsed: 8.054823064s Aug 24 23:57:19.694: INFO: Pod "pod-subpath-test-projected-jszg": Phase="Running", Reason="", readiness=true. Elapsed: 10.059010256s Aug 24 23:57:21.698: INFO: Pod "pod-subpath-test-projected-jszg": Phase="Running", Reason="", readiness=true. Elapsed: 12.063591388s Aug 24 23:57:23.703: INFO: Pod "pod-subpath-test-projected-jszg": Phase="Running", Reason="", readiness=true. Elapsed: 14.068090267s Aug 24 23:57:25.706: INFO: Pod "pod-subpath-test-projected-jszg": Phase="Running", Reason="", readiness=true. Elapsed: 16.071303812s Aug 24 23:57:27.767: INFO: Pod "pod-subpath-test-projected-jszg": Phase="Running", Reason="", readiness=true. Elapsed: 18.132281592s Aug 24 23:57:29.993: INFO: Pod "pod-subpath-test-projected-jszg": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.358099353s Aug 24 23:57:31.996: INFO: Pod "pod-subpath-test-projected-jszg": Phase="Running", Reason="", readiness=true. Elapsed: 22.361672196s Aug 24 23:57:34.210: INFO: Pod "pod-subpath-test-projected-jszg": Phase="Running", Reason="", readiness=true. Elapsed: 24.575631871s Aug 24 23:57:36.226: INFO: Pod "pod-subpath-test-projected-jszg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.59147692s STEP: Saw pod success Aug 24 23:57:36.226: INFO: Pod "pod-subpath-test-projected-jszg" satisfied condition "Succeeded or Failed" Aug 24 23:57:36.229: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-jszg container test-container-subpath-projected-jszg: STEP: delete the pod Aug 24 23:57:36.703: INFO: Waiting for pod pod-subpath-test-projected-jszg to disappear Aug 24 23:57:36.759: INFO: Pod pod-subpath-test-projected-jszg no longer exists STEP: Deleting pod pod-subpath-test-projected-jszg Aug 24 23:57:36.759: INFO: Deleting pod "pod-subpath-test-projected-jszg" in namespace "subpath-5867" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:57:36.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5867" for this suite. • [SLOW TEST:28.370 seconds] [sig-storage] Subpath /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":303,"completed":111,"skipped":1888,"failed":0} SS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:57:37.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 24 23:57:48.815: INFO: Waiting up to 5m0s for pod "client-envvars-5df711ec-ed10-4fae-b059-ddaed3527d64" in namespace "pods-2883" to be "Succeeded or Failed" Aug 24 23:57:49.227: INFO: Pod "client-envvars-5df711ec-ed10-4fae-b059-ddaed3527d64": Phase="Pending", Reason="", 
readiness=false. Elapsed: 412.325546ms Aug 24 23:57:51.400: INFO: Pod "client-envvars-5df711ec-ed10-4fae-b059-ddaed3527d64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.585044007s Aug 24 23:57:53.623: INFO: Pod "client-envvars-5df711ec-ed10-4fae-b059-ddaed3527d64": Phase="Pending", Reason="", readiness=false. Elapsed: 4.808466414s Aug 24 23:57:56.534: INFO: Pod "client-envvars-5df711ec-ed10-4fae-b059-ddaed3527d64": Phase="Pending", Reason="", readiness=false. Elapsed: 7.719198369s Aug 24 23:57:58.569: INFO: Pod "client-envvars-5df711ec-ed10-4fae-b059-ddaed3527d64": Phase="Pending", Reason="", readiness=false. Elapsed: 9.753891465s Aug 24 23:58:00.760: INFO: Pod "client-envvars-5df711ec-ed10-4fae-b059-ddaed3527d64": Phase="Pending", Reason="", readiness=false. Elapsed: 11.94496115s Aug 24 23:58:02.803: INFO: Pod "client-envvars-5df711ec-ed10-4fae-b059-ddaed3527d64": Phase="Pending", Reason="", readiness=false. Elapsed: 13.988187943s Aug 24 23:58:05.857: INFO: Pod "client-envvars-5df711ec-ed10-4fae-b059-ddaed3527d64": Phase="Running", Reason="", readiness=true. Elapsed: 17.042358045s Aug 24 23:58:07.861: INFO: Pod "client-envvars-5df711ec-ed10-4fae-b059-ddaed3527d64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.046435484s STEP: Saw pod success Aug 24 23:58:07.861: INFO: Pod "client-envvars-5df711ec-ed10-4fae-b059-ddaed3527d64" satisfied condition "Succeeded or Failed" Aug 24 23:58:07.864: INFO: Trying to get logs from node latest-worker2 pod client-envvars-5df711ec-ed10-4fae-b059-ddaed3527d64 container env3cont: STEP: delete the pod Aug 24 23:58:08.315: INFO: Waiting for pod client-envvars-5df711ec-ed10-4fae-b059-ddaed3527d64 to disappear Aug 24 23:58:08.570: INFO: Pod client-envvars-5df711ec-ed10-4fae-b059-ddaed3527d64 no longer exists [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:58:08.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2883" for this suite. 
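------------------------------
Reference sketch: the kubelet behavior under test. Services that already exist when a pod starts are injected into its containers as environment variables (service name uppercased, dashes becoming underscores): FOO_SERVICE_HOST, FOO_SERVICE_PORT, and FOO_PORT_* variants. A minimal client-go sketch, with the service/pod names, selector, and ports purely illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "default"

	// Create the service first; only services existing at pod start time are
	// reflected in that pod's environment.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "fooservice"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "serverpod"},
			Ports: []corev1.ServicePort{{
				Port:       8765,
				TargetPort: intstr.FromInt(8080),
			}},
		},
	}
	if _, err := cs.CoreV1().Services(ns).Create(
		context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env3cont",
				Image:   "busybox:1.32",
				Command: []string{"sh", "-c", "env | grep FOOSERVICE_"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Expected in the pod log: FOOSERVICE_SERVICE_HOST=<clusterIP> and
	// FOOSERVICE_SERVICE_PORT=8765, which is what the spec asserts on.
}
------------------------------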
• [SLOW TEST:30.847 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":303,"completed":112,"skipped":1890,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:58:08.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 24 23:58:08.727: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Aug 24 23:58:08.770: INFO: Pod name sample-pod: Found 0 pods out of 1 Aug 24 23:58:13.922: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 24 23:58:13.922: INFO: Creating deployment "test-rolling-update-deployment" Aug 24 23:58:14.222: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Aug 24 23:58:14.405: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Aug 24 23:58:16.729: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Aug 24 23:58:17.013: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910295, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910295, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910295, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910294, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-c4cb8d6d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 24 23:58:19.265: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, 
Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910295, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910295, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910295, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910294, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-c4cb8d6d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 24 23:58:21.053: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910295, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910295, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910295, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910294, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-c4cb8d6d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 24 23:58:23.127: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910295, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910295, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910302, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910294, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-c4cb8d6d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 24 23:58:25.282: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Aug 24 23:58:25.489: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-7303 /apis/apps/v1/namespaces/deployment-7303/deployments/test-rolling-update-deployment ef2e5764-8f7b-46f8-bc45-500344a1cb5e 3422903 1 2020-08-24 23:58:13 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-08-24 23:58:13 +0000 
UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-24 23:58:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0035d0658 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-24 23:58:15 +0000 UTC,LastTransitionTime:2020-08-24 23:58:15 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" has successfully progressed.,LastUpdateTime:2020-08-24 23:58:23 +0000 UTC,LastTransitionTime:2020-08-24 23:58:14 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Aug 24 23:58:25.492: INFO: New ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9 deployment-7303 /apis/apps/v1/namespaces/deployment-7303/replicasets/test-rolling-update-deployment-c4cb8d6d9 333787bc-2a33-41ff-8d35-3e248f11897c 3422889 1 2020-08-24 23:58:14 +0000 UTC map[name:sample-pod
pod-template-hash:c4cb8d6d9] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment ef2e5764-8f7b-46f8-bc45-500344a1cb5e 0xc0032fa790 0xc0032fa791}] [] [{kube-controller-manager Update apps/v1 2020-08-24 23:58:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef2e5764-8f7b-46f8-bc45-500344a1cb5e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: c4cb8d6d9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0032fa808 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Aug 24 23:58:25.492: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Aug 24 23:58:25.492: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-7303 /apis/apps/v1/namespaces/deployment-7303/replicasets/test-rolling-update-controller d6059b5c-f0b5-451f-8bff-e45c62f0376a 3422901 2 2020-08-24 23:58:08 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment ef2e5764-8f7b-46f8-bc45-500344a1cb5e 0xc0032fa687 0xc0032fa688}] [] [{e2e.test Update apps/v1 2020-08-24 23:58:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-24 23:58:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef2e5764-8f7b-46f8-bc45-500344a1cb5e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0032fa728 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 24 23:58:25.495: INFO: Pod "test-rolling-update-deployment-c4cb8d6d9-72hc2" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9-72hc2 test-rolling-update-deployment-c4cb8d6d9- deployment-7303 /api/v1/namespaces/deployment-7303/pods/test-rolling-update-deployment-c4cb8d6d9-72hc2 084f00f4-40b9-4f81-a8b0-05d555485a0a 3422888 0 2020-08-24 23:58:14 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-c4cb8d6d9 333787bc-2a33-41ff-8d35-3e248f11897c 0xc002c63110 0xc002c63111}] [] [{kube-controller-manager Update v1 2020-08-24 23:58:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"333787bc-2a33-41ff-8d35-3e248f11897c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-24 23:58:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.180\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s2qxh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s2qxh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s2qxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:58:15 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:58:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:58:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-24 23:58:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.180,StartTime:2020-08-24 23:58:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-24 23:58:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://95f58a89f0ea2e483e5420b0e95cd8798d2dd6d008469b55b80107a38c6c409b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.180,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:58:25.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7303" for this suite. • [SLOW TEST:16.916 seconds] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":113,"skipped":1895,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:58:25.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command Aug 24 23:58:27.278: INFO: Waiting up to 5m0s for pod "var-expansion-1d5dd3fb-efd9-49a3-959a-1b2b7661d458" in namespace 
"var-expansion-439" to be "Succeeded or Failed" Aug 24 23:58:27.342: INFO: Pod "var-expansion-1d5dd3fb-efd9-49a3-959a-1b2b7661d458": Phase="Pending", Reason="", readiness=false. Elapsed: 63.229432ms Aug 24 23:58:29.346: INFO: Pod "var-expansion-1d5dd3fb-efd9-49a3-959a-1b2b7661d458": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067468835s Aug 24 23:58:31.426: INFO: Pod "var-expansion-1d5dd3fb-efd9-49a3-959a-1b2b7661d458": Phase="Pending", Reason="", readiness=false. Elapsed: 4.147236609s Aug 24 23:58:33.430: INFO: Pod "var-expansion-1d5dd3fb-efd9-49a3-959a-1b2b7661d458": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.151187108s STEP: Saw pod success Aug 24 23:58:33.430: INFO: Pod "var-expansion-1d5dd3fb-efd9-49a3-959a-1b2b7661d458" satisfied condition "Succeeded or Failed" Aug 24 23:58:33.433: INFO: Trying to get logs from node latest-worker pod var-expansion-1d5dd3fb-efd9-49a3-959a-1b2b7661d458 container dapi-container: STEP: delete the pod Aug 24 23:58:33.490: INFO: Waiting for pod var-expansion-1d5dd3fb-efd9-49a3-959a-1b2b7661d458 to disappear Aug 24 23:58:33.508: INFO: Pod var-expansion-1d5dd3fb-efd9-49a3-959a-1b2b7661d458 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:58:33.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-439" for this suite. • [SLOW TEST:8.039 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":303,"completed":114,"skipped":1989,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:58:33.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Aug 24 23:58:33.673: INFO: Waiting up to 5m0s for pod "pod-528e9d7f-a98e-4684-b471-7871d18dfd58" in namespace "emptydir-7419" to be "Succeeded or Failed" Aug 24 23:58:33.682: INFO: Pod "pod-528e9d7f-a98e-4684-b471-7871d18dfd58": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.293517ms Aug 24 23:58:35.685: INFO: Pod "pod-528e9d7f-a98e-4684-b471-7871d18dfd58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01181005s Aug 24 23:58:37.927: INFO: Pod "pod-528e9d7f-a98e-4684-b471-7871d18dfd58": Phase="Running", Reason="", readiness=true. Elapsed: 4.253485541s Aug 24 23:58:39.931: INFO: Pod "pod-528e9d7f-a98e-4684-b471-7871d18dfd58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.257514707s STEP: Saw pod success Aug 24 23:58:39.931: INFO: Pod "pod-528e9d7f-a98e-4684-b471-7871d18dfd58" satisfied condition "Succeeded or Failed" Aug 24 23:58:39.934: INFO: Trying to get logs from node latest-worker pod pod-528e9d7f-a98e-4684-b471-7871d18dfd58 container test-container: STEP: delete the pod Aug 24 23:58:40.092: INFO: Waiting for pod pod-528e9d7f-a98e-4684-b471-7871d18dfd58 to disappear Aug 24 23:58:40.107: INFO: Pod pod-528e9d7f-a98e-4684-b471-7871d18dfd58 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:58:40.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7419" for this suite. • [SLOW TEST:6.570 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":115,"skipped":1996,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:58:40.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 24 23:58:41.213: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 24 23:58:43.857: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910321, loc:(*time.Location)(0x7712980)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910321, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910321, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910321, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 24 23:58:45.862: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910321, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910321, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910321, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910321, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 24 23:58:48.894: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Aug 24 23:58:48.914: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:58:49.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3220" for this suite. STEP: Destroying namespace "webhook-3220-markers" for this suite. 
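The deny-crd spec above registers a validating webhook against CustomResourceDefinition creation and then expects the create call to fail. A minimal sketch of an equivalent registration follows, reusing the service name and namespace seen in the log; the webhook name, handler path, and caBundle are placeholders rather than the suite's actual values:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-crd-example                 # placeholder name
webhooks:
  - name: deny-crd.example.com           # placeholder; must be fully qualified
    rules:
      - apiGroups: ["apiextensions.k8s.io"]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["customresourcedefinitions"]
    clientConfig:
      service:
        name: e2e-test-webhook           # service name from the log
        namespace: webhook-3220          # test namespace from the log
        path: /crd                       # assumed handler path
      caBundle: "<base64-encoded-CA>"    # placeholder
    sideEffects: None
    admissionReviewVersions: ["v1"]
    failurePolicy: Fail

With failurePolicy: Fail, a CREATE of any CustomResourceDefinition is rejected whenever the webhook denies it or cannot be reached.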
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.331 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":303,"completed":116,"skipped":2007,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:58:49.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-c668a8a0-1a82-4bc0-bb00-855761272027 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:58:57.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2062" for this suite. 
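The ConfigMap spec above verifies that both text (data) and binary (binaryData) keys are projected into a mounted volume. A minimal sketch of the objects involved, assuming a busybox image and placeholder names and values:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-example     # placeholder name
data:
  data-1: value-1                    # text key, mounted as a plain file
binaryData:
  dump.bin: 3q2+7w==                 # base64 of raw bytes (0xdeadbeef here)
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-binary-reader      # placeholder name
spec:
  restartPolicy: Never
  containers:
    - name: reader
      image: busybox                 # assumed image
      command: ["sh", "-c", "cat /etc/cm/data-1 && wc -c /etc/cm/dump.bin"]
      volumeMounts:
        - name: cm
          mountPath: /etc/cm
  volumes:
    - name: cm
      configMap:
        name: configmap-binary-example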
• [SLOW TEST:7.788 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":117,"skipped":2010,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:58:57.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 24 23:58:57.911: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 24 23:58:59.922: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910337, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910337, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910338, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910337, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 24 23:59:03.067: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 24 23:59:03.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6987-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is 
storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:59:04.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-761" for this suite. STEP: Destroying namespace "webhook-761-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.504 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":303,"completed":118,"skipped":2012,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:59:04.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 24 23:59:04.921: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4b5cedb5-858d-463f-a31b-bc44de57463b" in namespace "downward-api-8283" to be "Succeeded or Failed" Aug 24 23:59:04.984: INFO: Pod "downwardapi-volume-4b5cedb5-858d-463f-a31b-bc44de57463b": Phase="Pending", Reason="", readiness=false. Elapsed: 62.074716ms Aug 24 23:59:06.986: INFO: Pod "downwardapi-volume-4b5cedb5-858d-463f-a31b-bc44de57463b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064979967s Aug 24 23:59:09.013: INFO: Pod "downwardapi-volume-4b5cedb5-858d-463f-a31b-bc44de57463b": Phase="Running", Reason="", readiness=true. Elapsed: 4.091586926s Aug 24 23:59:11.018: INFO: Pod "downwardapi-volume-4b5cedb5-858d-463f-a31b-bc44de57463b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.09624495s STEP: Saw pod success Aug 24 23:59:11.018: INFO: Pod "downwardapi-volume-4b5cedb5-858d-463f-a31b-bc44de57463b" satisfied condition "Succeeded or Failed" Aug 24 23:59:11.021: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-4b5cedb5-858d-463f-a31b-bc44de57463b container client-container: STEP: delete the pod Aug 24 23:59:11.074: INFO: Waiting for pod downwardapi-volume-4b5cedb5-858d-463f-a31b-bc44de57463b to disappear Aug 24 23:59:11.109: INFO: Pod downwardapi-volume-4b5cedb5-858d-463f-a31b-bc44de57463b no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:59:11.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8283" for this suite. • [SLOW TEST:6.377 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":119,"skipped":2067,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:59:11.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Aug 24 23:59:11.459: INFO: Waiting up to 5m0s for pod "pod-27fc36a6-6870-4fde-a2f1-60eee7a7e730" in namespace "emptydir-7002" to be "Succeeded or Failed" Aug 24 23:59:11.536: INFO: Pod "pod-27fc36a6-6870-4fde-a2f1-60eee7a7e730": Phase="Pending", Reason="", readiness=false. Elapsed: 77.36655ms Aug 24 23:59:13.539: INFO: Pod "pod-27fc36a6-6870-4fde-a2f1-60eee7a7e730": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080071816s Aug 24 23:59:15.886: INFO: Pod "pod-27fc36a6-6870-4fde-a2f1-60eee7a7e730": Phase="Pending", Reason="", readiness=false. Elapsed: 4.427333825s Aug 24 23:59:17.890: INFO: Pod "pod-27fc36a6-6870-4fde-a2f1-60eee7a7e730": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.430815439s STEP: Saw pod success Aug 24 23:59:17.890: INFO: Pod "pod-27fc36a6-6870-4fde-a2f1-60eee7a7e730" satisfied condition "Succeeded or Failed" Aug 24 23:59:17.892: INFO: Trying to get logs from node latest-worker2 pod pod-27fc36a6-6870-4fde-a2f1-60eee7a7e730 container test-container: STEP: delete the pod Aug 24 23:59:18.040: INFO: Waiting for pod pod-27fc36a6-6870-4fde-a2f1-60eee7a7e730 to disappear Aug 24 23:59:18.127: INFO: Pod pod-27fc36a6-6870-4fde-a2f1-60eee7a7e730 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 24 23:59:18.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7002" for this suite. • [SLOW TEST:7.038 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":120,"skipped":2071,"failed":0} SSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 24 23:59:18.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-7089f4fb-d846-49ea-adb0-4e26e4ab80e3 in namespace container-probe-1688 Aug 24 23:59:22.497: INFO: Started pod liveness-7089f4fb-d846-49ea-adb0-4e26e4ab80e3 in namespace container-probe-1688 STEP: checking the pod's current state and verifying that restartCount is present Aug 24 23:59:22.501: INFO: Initial restart count of pod liveness-7089f4fb-d846-49ea-adb0-4e26e4ab80e3 is 0 Aug 24 23:59:45.079: INFO: Restart count of pod container-probe-1688/liveness-7089f4fb-d846-49ea-adb0-4e26e4ab80e3 is now 1 (22.578153722s elapsed) Aug 25 00:00:05.157: INFO: Restart count of pod container-probe-1688/liveness-7089f4fb-d846-49ea-adb0-4e26e4ab80e3 is now 2 (42.656211836s elapsed) Aug 25 00:00:23.426: INFO: Restart count of pod container-probe-1688/liveness-7089f4fb-d846-49ea-adb0-4e26e4ab80e3 is now 3 (1m0.925167155s elapsed) Aug 25 00:00:43.708: INFO: Restart count of pod 
container-probe-1688/liveness-7089f4fb-d846-49ea-adb0-4e26e4ab80e3 is now 4 (1m21.20652067s elapsed) Aug 25 00:01:46.225: INFO: Restart count of pod container-probe-1688/liveness-7089f4fb-d846-49ea-adb0-4e26e4ab80e3 is now 5 (2m23.724012725s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:01:46.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1688" for this suite. • [SLOW TEST:148.292 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":303,"completed":121,"skipped":2078,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:01:46.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
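The container-probe spec above drives its pod through five restarts and asserts that restartCount only ever increases. The restarts come from a liveness probe that starts failing shortly after the container boots; a minimal sketch of a pod with that behavior, assuming a busybox image and illustrative timings rather than the suite's exact values:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-example             # placeholder name
spec:
  containers:
    - name: liveness
      image: busybox                 # assumed image
      # healthy for 10s, then the probed file disappears and the kubelet
      # restarts the container each time the failure threshold is hit
      command: ["sh", "-c", "touch /tmp/healthy; sleep 10; rm /tmp/healthy; sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/healthy"]
        initialDelaySeconds: 5
        periodSeconds: 5

Each restart increments status.containerStatuses[0].restartCount, which is the counter the spec polls.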
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Aug 25 00:01:55.386: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 25 00:01:55.445: INFO: Pod pod-with-poststart-http-hook still exists Aug 25 00:01:57.445: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 25 00:01:57.450: INFO: Pod pod-with-poststart-http-hook still exists Aug 25 00:01:59.445: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 25 00:01:59.450: INFO: Pod pod-with-poststart-http-hook still exists Aug 25 00:02:01.445: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 25 00:02:01.451: INFO: Pod pod-with-poststart-http-hook still exists Aug 25 00:02:03.445: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 25 00:02:03.451: INFO: Pod pod-with-poststart-http-hook still exists Aug 25 00:02:05.445: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 25 00:02:05.450: INFO: Pod pod-with-poststart-http-hook still exists Aug 25 00:02:07.445: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 25 00:02:07.450: INFO: Pod pod-with-poststart-http-hook still exists Aug 25 00:02:09.445: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 25 00:02:09.449: INFO: Pod pod-with-poststart-http-hook still exists Aug 25 00:02:11.445: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 25 00:02:11.449: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:02:11.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9977" for this suite. 
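The lifecycle-hook spec above starts a handler pod, creates pod-with-poststart-http-hook whose postStart hook issues an HTTP GET against that handler, and finally checks that the handler saw the request. A minimal sketch of the hooked pod; the image, path, and handler address are placeholders, not the suite's values:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook   # name matches the log
spec:
  containers:
    - name: main
      image: k8s.gcr.io/pause:3.2      # assumed; any long-running image works
      lifecycle:
        postStart:
          httpGet:
            path: /echo?msg=poststart  # assumed handler path
            host: 10.244.2.15          # placeholder: handler pod IP
            port: 8080                 # assumed handler port

The kubelet executes the httpGet hook itself, so the main container needs no HTTP client, and the container is not reported Running until the hook completes.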
• [SLOW TEST:25.006 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":303,"completed":122,"skipped":2144,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:02:11.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-6a8fcf18-2b5f-4096-be83-111b00e1316a STEP: Creating a pod to test consume secrets Aug 25 00:02:11.570: INFO: Waiting up to 5m0s for pod "pod-secrets-d664a10b-e135-4ff9-be2c-d026cea1c52b" in namespace "secrets-8434" to be "Succeeded or Failed" Aug 25 00:02:11.674: INFO: Pod "pod-secrets-d664a10b-e135-4ff9-be2c-d026cea1c52b": Phase="Pending", Reason="", readiness=false. Elapsed: 104.331702ms Aug 25 00:02:13.678: INFO: Pod "pod-secrets-d664a10b-e135-4ff9-be2c-d026cea1c52b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107851144s Aug 25 00:02:15.682: INFO: Pod "pod-secrets-d664a10b-e135-4ff9-be2c-d026cea1c52b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112223994s Aug 25 00:02:17.998: INFO: Pod "pod-secrets-d664a10b-e135-4ff9-be2c-d026cea1c52b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.427512313s STEP: Saw pod success Aug 25 00:02:17.998: INFO: Pod "pod-secrets-d664a10b-e135-4ff9-be2c-d026cea1c52b" satisfied condition "Succeeded or Failed" Aug 25 00:02:18.011: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-d664a10b-e135-4ff9-be2c-d026cea1c52b container secret-volume-test: STEP: delete the pod Aug 25 00:02:18.184: INFO: Waiting for pod pod-secrets-d664a10b-e135-4ff9-be2c-d026cea1c52b to disappear Aug 25 00:02:18.233: INFO: Pod pod-secrets-d664a10b-e135-4ff9-be2c-d026cea1c52b no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:02:18.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8434" for this suite. • [SLOW TEST:6.783 seconds] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":123,"skipped":2153,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:02:18.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Aug 25 00:02:18.552: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3885 /api/v1/namespaces/watch-3885/configmaps/e2e-watch-test-label-changed a1cf3401-b3f0-49d7-aa4e-e3c412af4be2 3423976 0 2020-08-25 00:02:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-25 00:02:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 25 00:02:18.553: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3885 /api/v1/namespaces/watch-3885/configmaps/e2e-watch-test-label-changed a1cf3401-b3f0-49d7-aa4e-e3c412af4be2 3423977 0 2020-08-25 
00:02:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-25 00:02:18 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 25 00:02:18.553: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3885 /api/v1/namespaces/watch-3885/configmaps/e2e-watch-test-label-changed a1cf3401-b3f0-49d7-aa4e-e3c412af4be2 3423978 0 2020-08-25 00:02:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-25 00:02:18 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Aug 25 00:02:28.895: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3885 /api/v1/namespaces/watch-3885/configmaps/e2e-watch-test-label-changed a1cf3401-b3f0-49d7-aa4e-e3c412af4be2 3424021 0 2020-08-25 00:02:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-25 00:02:28 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 25 00:02:28.896: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3885 /api/v1/namespaces/watch-3885/configmaps/e2e-watch-test-label-changed a1cf3401-b3f0-49d7-aa4e-e3c412af4be2 3424022 0 2020-08-25 00:02:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-25 00:02:28 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 25 00:02:28.896: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3885 /api/v1/namespaces/watch-3885/configmaps/e2e-watch-test-label-changed a1cf3401-b3f0-49d7-aa4e-e3c412af4be2 3424023 0 2020-08-25 00:02:18 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-25 00:02:28 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:02:28.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3885" for this suite. 
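The Watchers spec above opens a watch filtered by label and shows that, from the watch's point of view, changing the label away from the selector surfaces as a DELETED event and restoring it surfaces as ADDED, with MODIFIED events carrying the data mutations in between. A minimal sketch of the watched object, using the label key and value from the log and placeholder data:

apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed                  # name from the log
  labels:
    watch-this-configmap: label-changed-and-restored  # selector the watch filters on
data:
  mutation: "1"

The same stream can be observed by hand with kubectl get configmap -l watch-this-configmap=label-changed-and-restored --watch while the label is toggled.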
• [SLOW TEST:10.678 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":303,"completed":124,"skipped":2169,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:02:28.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:02:36.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5997" for this suite. • [SLOW TEST:7.519 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":303,"completed":125,"skipped":2182,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:02:36.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Aug 25 00:02:37.792: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Aug 25 00:02:39.806: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910557, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910557, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910557, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910557, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 25 00:02:41.891: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910557, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910557, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910557, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733910557, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the 
endpoint Aug 25 00:02:44.867: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 25 00:02:44.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:02:46.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-6816" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:10.417 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":303,"completed":126,"skipped":2197,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:02:46.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 25 00:02:46.934: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Aug 25 00:02:50.501: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3430 create -f -' Aug 25 00:02:57.445: INFO: stderr: "" Aug 25 00:02:57.445: INFO: stdout: "e2e-test-crd-publish-openapi-7651-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Aug 25 00:02:57.445: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-3430 delete e2e-test-crd-publish-openapi-7651-crds test-foo' Aug 25 00:02:57.644: INFO: stderr: "" Aug 25 00:02:57.644: INFO: stdout: "e2e-test-crd-publish-openapi-7651-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Aug 25 00:02:57.644: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3430 apply -f -' Aug 25 00:02:57.968: INFO: stderr: "" Aug 25 00:02:57.968: INFO: stdout: "e2e-test-crd-publish-openapi-7651-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Aug 25 00:02:57.968: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3430 delete e2e-test-crd-publish-openapi-7651-crds test-foo' Aug 25 00:02:58.096: INFO: stderr: "" Aug 25 00:02:58.096: INFO: stdout: "e2e-test-crd-publish-openapi-7651-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Aug 25 00:02:58.096: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3430 create -f -' Aug 25 00:02:58.382: INFO: rc: 1 Aug 25 00:02:58.383: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3430 apply -f -' Aug 25 00:02:58.661: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Aug 25 00:02:58.661: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3430 create -f -' Aug 25 00:02:58.944: INFO: rc: 1 Aug 25 00:02:58.944: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3430 apply -f -' Aug 25 00:02:59.250: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Aug 25 00:02:59.250: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7651-crds' Aug 25 00:02:59.544: INFO: stderr: "" Aug 25 00:02:59.544: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7651-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata.
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Aug 25 00:02:59.545: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7651-crds.metadata' Aug 25 00:02:59.822: INFO: stderr: "" Aug 25 00:02:59.822: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7651-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. 
Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. 
Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Aug 25 00:02:59.823: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7651-crds.spec' Aug 25 00:03:00.096: INFO: stderr: "" Aug 25 00:03:00.096: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7651-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Aug 25 00:03:00.097: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7651-crds.spec.bars' Aug 25 00:03:00.368: INFO: stderr: "" Aug 25 00:03:00.368: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7651-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Aug 25 00:03:00.369: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7651-crds.spec.bars2' Aug 25 00:03:00.703: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:03:03.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3430" for this suite. 
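The crd-publish-openapi spec above publishes a CRD with a structural validation schema and exercises it through kubectl: creates with known properties succeed, unknown or missing required properties are rejected client-side (rc: 1), and kubectl explain renders the published fields. A minimal sketch of a CRD matching the explain output quoted in the log (spec.bars with name required, plus age and bazs); the group and kind are illustrative:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com          # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          description: Foo CRD for Testing
          properties:
            spec:
              type: object
              description: Specification of Foo
              properties:
                bars:
                  description: List of Bars and their specs.
                  type: array
                  items:
                    type: object
                    required: ["name"]
                    properties:
                      name:
                        description: Name of Bar.
                        type: string
                      age:
                        description: Age of Bar.
                        type: string
                      bazs:
                        description: List of Bazs.
                        type: array
                        items:
                          type: string
            status:
              type: object
              description: Status of Foo

Because the schema is structural, the API server publishes it in OpenAPI, which is what lets kubectl validate creates client-side and answer explain queries.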
• [SLOW TEST:16.828 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":303,"completed":127,"skipped":2205,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:03:03.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 25 00:03:07.860: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:03:07.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2331" for this suite. 
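The container-runtime spec above checks that when a pod succeeds with TerminationMessagePolicy FallbackToLogsOnError, the message is still read from the termination-message file (the log shows Expected: &{OK}). A minimal sketch of such a pod, assuming a busybox image and a placeholder name:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example        # placeholder name
spec:
  restartPolicy: Never
  containers:
    - name: main
      image: busybox                       # assumed image
      # succeed after writing the message file; FallbackToLogsOnError only
      # falls back to the log tail when the container fails AND the file is empty
      command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: FallbackToLogsOnError

After the pod completes, the message appears in status.containerStatuses[0].state.terminated.message.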
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":128,"skipped":2230,"failed":0} SSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:03:07.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-1906 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-1906 STEP: Deleting pre-stop pod Aug 25 00:03:21.281: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:03:21.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-1906" for this suite. 
• [SLOW TEST:13.786 seconds] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":303,"completed":129,"skipped":2237,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] server version should find the server version [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] server version /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:03:21.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Request ServerVersion STEP: Confirm major version Aug 25 00:03:22.291: INFO: Major version: 1 STEP: Confirm minor version Aug 25 00:03:22.291: INFO: cleanMinorVersion: 19 Aug 25 00:03:22.291: INFO: Minor version: 19+ [AfterEach] [sig-api-machinery] server version /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:03:22.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-8978" for this suite. 
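The Major/Minor/cleanMinorVersion values above come from the discovery endpoint. A sketch of the same query with client-go (kubeconfig path taken from this run; the regexp mirrors the test's stripping of the "+" suffix from "19+"):

package main

import (
	"fmt"
	"regexp"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	// Minor can carry a suffix such as "19+"; keep only the leading digits,
	// like the cleanMinorVersion step above, before any numeric comparison.
	clean := regexp.MustCompile(`^\d+`).FindString(v.Minor)
	fmt.Printf("major=%s minor=%s cleanMinor=%s gitVersion=%s\n", v.Major, v.Minor, clean, v.GitVersion)
}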
•{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":303,"completed":130,"skipped":2268,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:03:22.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create deployment with httpd image Aug 25 00:03:22.442: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f -' Aug 25 00:03:23.083: INFO: stderr: "" Aug 25 00:03:23.083: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Aug 25 00:03:23.083: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config diff -f -' Aug 25 00:03:24.266: INFO: rc: 1 Aug 25 00:03:24.266: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete -f -' Aug 25 00:03:24.513: INFO: stderr: "" Aug 25 00:03:24.513: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:03:24.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1484" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":303,"completed":131,"skipped":2273,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:03:24.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:04:24.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9826" for this suite. • [SLOW TEST:60.063 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":303,"completed":132,"skipped":2278,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:04:24.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] 
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Aug 25 00:04:31.358: INFO: Successfully updated pod "labelsupdate6be03dd8-7208-4811-9b32-f36542b2310b" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:04:33.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7099" for this suite. • [SLOW TEST:8.721 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":133,"skipped":2297,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:04:33.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 25 00:04:33.575: INFO: Create a RollingUpdate DaemonSet Aug 25 00:04:33.579: INFO: Check that daemon pods launch on every node of the cluster Aug 25 00:04:33.639: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:04:33.662: INFO: Number of nodes with available pods: 0 Aug 25 00:04:33.662: INFO: Node latest-worker is running more than one daemon pod Aug 25 00:04:34.667: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:04:34.670: INFO: Number of nodes with available pods: 0 Aug 25 00:04:34.670: INFO: Node latest-worker is running more than one daemon pod Aug 25 00:04:35.695: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:04:35.704: INFO: Number of nodes with available pods: 0 Aug 25 00:04:35.704: INFO: Node latest-worker is running more than one daemon pod Aug 25 
00:04:36.731: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:04:36.772: INFO: Number of nodes with available pods: 0 Aug 25 00:04:36.772: INFO: Node latest-worker is running more than one daemon pod Aug 25 00:04:37.667: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:04:37.671: INFO: Number of nodes with available pods: 2 Aug 25 00:04:37.671: INFO: Number of running nodes: 2, number of available pods: 2 Aug 25 00:04:37.671: INFO: Update the DaemonSet to trigger a rollout Aug 25 00:04:37.680: INFO: Updating DaemonSet daemon-set Aug 25 00:04:49.776: INFO: Roll back the DaemonSet before rollout is complete Aug 25 00:04:49.794: INFO: Updating DaemonSet daemon-set Aug 25 00:04:49.794: INFO: Make sure DaemonSet rollback is complete Aug 25 00:04:49.812: INFO: Wrong image for pod: daemon-set-vk4s2. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Aug 25 00:04:49.812: INFO: Pod daemon-set-vk4s2 is not available Aug 25 00:04:49.829: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:04:50.855: INFO: Wrong image for pod: daemon-set-vk4s2. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Aug 25 00:04:50.855: INFO: Pod daemon-set-vk4s2 is not available Aug 25 00:04:50.859: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:04:51.835: INFO: Wrong image for pod: daemon-set-vk4s2. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Aug 25 00:04:51.835: INFO: Pod daemon-set-vk4s2 is not available Aug 25 00:04:51.839: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:04:52.835: INFO: Wrong image for pod: daemon-set-vk4s2. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Aug 25 00:04:52.835: INFO: Pod daemon-set-vk4s2 is not available Aug 25 00:04:52.849: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:04:53.834: INFO: Wrong image for pod: daemon-set-vk4s2. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Aug 25 00:04:53.834: INFO: Pod daemon-set-vk4s2 is not available Aug 25 00:04:53.839: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:04:54.833: INFO: Wrong image for pod: daemon-set-vk4s2. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Aug 25 00:04:54.833: INFO: Pod daemon-set-vk4s2 is not available Aug 25 00:04:54.836: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:04:55.834: INFO: Pod daemon-set-l8n72 is not available Aug 25 00:04:55.839: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3758, will wait for the garbage collector to delete the pods Aug 25 00:04:55.904: INFO: Deleting DaemonSet.extensions daemon-set took: 7.01854ms Aug 25 00:04:56.405: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.286192ms Aug 25 00:05:10.108: INFO: Number of nodes with available pods: 0 Aug 25 00:05:10.108: INFO: Number of running nodes: 0, number of available pods: 0 Aug 25 00:05:10.110: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3758/daemonsets","resourceVersion":"3424832"},"items":null} Aug 25 00:05:10.113: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3758/pods","resourceVersion":"3424832"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:05:10.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3758" for this suite. 
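The repeated "Number of nodes with available pods" lines above are a poll of DaemonSet status; the control-plane node is excluded from DesiredNumberScheduled because the pods don't tolerate its node-role.kubernetes.io/master NoSchedule taint. A sketch of the same poll with client-go (namespace and name taken from this run):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll until every node counted in DesiredNumberScheduled has an
	// available daemon pod, mirroring the log's once-per-second checks.
	err = wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		ds, err := cs.AppsV1().DaemonSets("daemonsets-3758").Get(context.TODO(), "daemon-set", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("available=%d desired=%d\n", ds.Status.NumberAvailable, ds.Status.DesiredNumberScheduled)
		return ds.Status.DesiredNumberScheduled > 0 &&
			ds.Status.NumberAvailable == ds.Status.DesiredNumberScheduled, nil
	})
	if err != nil {
		panic(err)
	}
}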
• [SLOW TEST:36.715 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":303,"completed":134,"skipped":2306,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:05:10.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0825 00:05:11.318124 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 25 00:06:13.338: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:06:13.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1469" for this suite. 
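Deleting the Deployment "when not orphaning" means the garbage collector follows ownerReferences down to the ReplicaSet and its pods, which is why the test waits for 0 rs and 0 pods after the delete. A sketch of a cascading delete with client-go (namespace and name are illustrative):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Background (or Foreground) propagation cascades through
	// ownerReferences; Orphan would leave the ReplicaSet and pods behind.
	policy := metav1.DeletePropagationBackground
	if err := cs.AppsV1().Deployments("default").Delete( // namespace/name illustrative
		context.TODO(), "simpletest-deployment",
		metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
		panic(err)
	}
}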
• [SLOW TEST:63.221 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":303,"completed":135,"skipped":2308,"failed":0} S ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:06:13.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Aug 25 00:06:13.413: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-166 /api/v1/namespaces/watch-166/configmaps/e2e-watch-test-configmap-a ea86cae0-43e4-4232-8d8c-e377db962c26 3425075 0 2020-08-25 00:06:13 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-25 00:06:13 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 25 00:06:13.413: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-166 /api/v1/namespaces/watch-166/configmaps/e2e-watch-test-configmap-a ea86cae0-43e4-4232-8d8c-e377db962c26 3425075 0 2020-08-25 00:06:13 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-25 00:06:13 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Aug 25 00:06:23.421: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-166 /api/v1/namespaces/watch-166/configmaps/e2e-watch-test-configmap-a ea86cae0-43e4-4232-8d8c-e377db962c26 3425118 0 2020-08-25 00:06:13 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-25 00:06:23 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 25 00:06:23.422: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-166 /api/v1/namespaces/watch-166/configmaps/e2e-watch-test-configmap-a ea86cae0-43e4-4232-8d8c-e377db962c26 3425118 0 2020-08-25 00:06:13 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-25 00:06:23 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Aug 25 00:06:33.430: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-166 /api/v1/namespaces/watch-166/configmaps/e2e-watch-test-configmap-a ea86cae0-43e4-4232-8d8c-e377db962c26 3425148 0 2020-08-25 00:06:13 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-25 00:06:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 25 00:06:33.430: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-166 /api/v1/namespaces/watch-166/configmaps/e2e-watch-test-configmap-a ea86cae0-43e4-4232-8d8c-e377db962c26 3425148 0 2020-08-25 00:06:13 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-25 00:06:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Aug 25 00:06:43.445: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-166 /api/v1/namespaces/watch-166/configmaps/e2e-watch-test-configmap-a ea86cae0-43e4-4232-8d8c-e377db962c26 3425178 0 2020-08-25 00:06:13 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-25 00:06:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 25 00:06:43.445: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-166 /api/v1/namespaces/watch-166/configmaps/e2e-watch-test-configmap-a ea86cae0-43e4-4232-8d8c-e377db962c26 3425178 0 2020-08-25 00:06:13 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-25 00:06:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Aug 25 00:06:53.453: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-166 /api/v1/namespaces/watch-166/configmaps/e2e-watch-test-configmap-b c97438c1-391f-43b1-983b-a842aa381066 3425204 0 2020-08-25 00:06:53 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-25 00:06:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 25 00:06:53.453: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-166 /api/v1/namespaces/watch-166/configmaps/e2e-watch-test-configmap-b c97438c1-391f-43b1-983b-a842aa381066 3425204 0 2020-08-25 00:06:53 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-25 00:06:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Aug 25 00:07:03.712: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-166 /api/v1/namespaces/watch-166/configmaps/e2e-watch-test-configmap-b c97438c1-391f-43b1-983b-a842aa381066 3425231 0 2020-08-25 00:06:53 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-25 00:06:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 25 00:07:03.713: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-166 /api/v1/namespaces/watch-166/configmaps/e2e-watch-test-configmap-b c97438c1-391f-43b1-983b-a842aa381066 3425231 0 2020-08-25 00:06:53 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-25 00:06:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:07:13.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-166" for this suite. 
• [SLOW TEST:60.415 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":303,"completed":136,"skipped":2309,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:07:13.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Aug 25 00:07:16.186: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:07:16.223: INFO: Number of nodes with available pods: 0 Aug 25 00:07:16.223: INFO: Node latest-worker is running more than one daemon pod Aug 25 00:07:17.997: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:07:18.156: INFO: Number of nodes with available pods: 0 Aug 25 00:07:18.156: INFO: Node latest-worker is running more than one daemon pod Aug 25 00:07:18.507: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:07:18.986: INFO: Number of nodes with available pods: 0 Aug 25 00:07:18.986: INFO: Node latest-worker is running more than one daemon pod Aug 25 00:07:19.885: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:07:19.890: INFO: Number of nodes with available pods: 0 Aug 25 00:07:19.890: INFO: Node latest-worker is running more than one daemon pod Aug 25 00:07:20.908: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:07:21.384: INFO: Number of nodes with available pods: 0 Aug 25 00:07:21.384: INFO: Node latest-worker is running more than one daemon pod Aug 25 00:07:22.681: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:07:23.100: INFO: Number of nodes with available pods: 0 Aug 25 00:07:23.100: INFO: Node latest-worker is running more than one daemon pod Aug 25 00:07:24.075: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:07:24.078: INFO: Number of nodes with available pods: 0 Aug 25 00:07:24.078: INFO: Node latest-worker is running more than one daemon pod Aug 25 00:07:25.040: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:07:25.043: INFO: Number of nodes with available pods: 0 Aug 25 00:07:25.043: INFO: Node latest-worker is running more than one daemon pod Aug 25 00:07:25.448: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:07:25.454: INFO: Number of nodes with available pods: 0 Aug 25 00:07:25.454: INFO: Node latest-worker is running more than one daemon pod Aug 25 00:07:26.645: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:07:26.650: INFO: Number of nodes with available pods: 0 Aug 25 00:07:26.650: INFO: Node latest-worker is running more than one daemon pod Aug 25 00:07:28.125: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:07:28.770: INFO: Number of nodes with available pods: 0 Aug 25 00:07:28.770: INFO: Node latest-worker is running more than one daemon pod Aug 25 00:07:29.484: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:07:30.518: INFO: Number of nodes with available pods: 2 Aug 25 00:07:30.518: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Aug 25 00:07:31.607: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:07:32.201: INFO: Number of nodes with available pods: 1 Aug 25 00:07:32.201: INFO: Node latest-worker is running more than one daemon pod Aug 25 00:07:33.543: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:07:33.546: INFO: Number of nodes with available pods: 1 Aug 25 00:07:33.546: INFO: Node latest-worker is running more than one daemon pod Aug 25 00:07:34.454: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:07:34.458: INFO: Number of nodes with available pods: 1 Aug 25 00:07:34.458: INFO: Node latest-worker is running more than one daemon pod Aug 25 00:07:35.207: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:07:35.209: INFO: Number of nodes with available pods: 1 Aug 25 00:07:35.210: INFO: Node latest-worker is running more than one daemon pod Aug 25 00:07:36.411: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:07:36.417: INFO: Number of nodes with available pods: 1 Aug 25 00:07:36.417: INFO: Node latest-worker is running more than one daemon pod Aug 25 00:07:37.471: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:07:37.476: INFO: Number of nodes with available pods: 1 Aug 25 00:07:37.476: INFO: Node latest-worker is running more than one daemon pod Aug 25 00:07:38.270: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:07:38.273: INFO: Number of nodes with available pods: 1 Aug 25 00:07:38.273: INFO: Node latest-worker is running more than one daemon pod Aug 25 00:07:39.366: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:07:39.369: INFO: Number of nodes with available pods: 1 Aug 25 00:07:39.369: INFO: Node latest-worker is running more than one daemon pod Aug 25 00:07:40.431: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master 
Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:07:40.435: INFO: Number of nodes with available pods: 1 Aug 25 00:07:40.435: INFO: Node latest-worker is running more than one daemon pod Aug 25 00:07:41.351: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 25 00:07:41.355: INFO: Number of nodes with available pods: 2 Aug 25 00:07:41.355: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9872, will wait for the garbage collector to delete the pods Aug 25 00:07:41.441: INFO: Deleting DaemonSet.extensions daemon-set took: 5.614676ms Aug 25 00:07:43.941: INFO: Terminating DaemonSet.extensions daemon-set pods took: 2.500257501s Aug 25 00:07:50.995: INFO: Number of nodes with available pods: 0 Aug 25 00:07:50.995: INFO: Number of running nodes: 0, number of available pods: 0 Aug 25 00:07:50.998: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9872/daemonsets","resourceVersion":"3425404"},"items":null} Aug 25 00:07:51.265: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9872/pods","resourceVersion":"3425405"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:07:51.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9872" for this suite. 
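The "Failed" phase above is injected through the pod status subresource; the DaemonSet controller notices the failed pod, deletes it, and creates a replacement, which is the revival the poll then waits for. A sketch of that injection (the label selector is illustrative; the e2e framework uses the labels it stamped on the DaemonSet's pod template):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "daemonsets-9872" // namespace from this run
	pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{
		LabelSelector: "daemonset-name=daemon-set", // illustrative selector
	})
	if err != nil {
		panic(err)
	}
	if len(pods.Items) == 0 {
		panic("no daemon pods found")
	}
	pod := pods.Items[0]
	// Writing Phase through the status subresource is the fault injection;
	// the controller reacts by replacing the pod.
	pod.Status.Phase = corev1.PodFailed
	if _, err := cs.CoreV1().Pods(ns).UpdateStatus(context.TODO(), &pod, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}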
• [SLOW TEST:38.139 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":303,"completed":137,"skipped":2318,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:07:51.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-9748f784-7eaa-4f5e-8658-91819734c688 STEP: Creating a pod to test consume secrets Aug 25 00:07:52.859: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-326a32f8-9555-424a-889e-26fe1ae391af" in namespace "projected-3231" to be "Succeeded or Failed" Aug 25 00:07:53.029: INFO: Pod "pod-projected-secrets-326a32f8-9555-424a-889e-26fe1ae391af": Phase="Pending", Reason="", readiness=false. Elapsed: 169.626737ms Aug 25 00:07:55.033: INFO: Pod "pod-projected-secrets-326a32f8-9555-424a-889e-26fe1ae391af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173947448s Aug 25 00:07:57.084: INFO: Pod "pod-projected-secrets-326a32f8-9555-424a-889e-26fe1ae391af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.224327391s Aug 25 00:07:59.158: INFO: Pod "pod-projected-secrets-326a32f8-9555-424a-889e-26fe1ae391af": Phase="Running", Reason="", readiness=true. Elapsed: 6.298863078s Aug 25 00:08:01.236: INFO: Pod "pod-projected-secrets-326a32f8-9555-424a-889e-26fe1ae391af": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.376617502s STEP: Saw pod success Aug 25 00:08:01.236: INFO: Pod "pod-projected-secrets-326a32f8-9555-424a-889e-26fe1ae391af" satisfied condition "Succeeded or Failed" Aug 25 00:08:01.239: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-326a32f8-9555-424a-889e-26fe1ae391af container projected-secret-volume-test: STEP: delete the pod Aug 25 00:08:01.448: INFO: Waiting for pod pod-projected-secrets-326a32f8-9555-424a-889e-26fe1ae391af to disappear Aug 25 00:08:01.494: INFO: Pod pod-projected-secrets-326a32f8-9555-424a-889e-26fe1ae391af no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:08:01.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3231" for this suite. • [SLOW TEST:9.635 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":138,"skipped":2381,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:08:01.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Aug 25 00:08:06.330: INFO: Successfully updated pod "annotationupdate1cff119a-f4c2-48f8-86b7-560a942382a9" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:08:08.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5926" for this suite. 
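The annotations file in this test is served by a downward API volume; the kubelet rewrites the file when the pod's annotations change, which is what "Successfully updated pod" verifies above (the labels test earlier uses the same mechanism with metadata.labels). A sketch of the volume shape (pod name, annotation, and image are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func annotationsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-example",
			Annotations: map[string]string{"builder": "bar"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.29", // illustrative
				Command: []string{"/bin/sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "podinfo", MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							// The kubelet refreshes this file when the pod's
							// annotations are updated.
							Path:     "annotations",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				},
			}},
		},
	}
}

func main() { fmt.Println(annotationsPod().Name) }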
• [SLOW TEST:6.973 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":139,"skipped":2392,"failed":0} SSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:08:08.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-3076 STEP: creating service affinity-clusterip-transition in namespace services-3076 STEP: creating replication controller affinity-clusterip-transition in namespace services-3076 I0825 00:08:09.331852 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-3076, replica count: 3 I0825 00:08:12.382275 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0825 00:08:15.382513 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0825 00:08:18.382770 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 25 00:08:18.389: INFO: Creating new exec pod Aug 25 00:08:23.415: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-3076 execpod-affinity7d97v -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Aug 25 00:08:23.644: INFO: stderr: "I0825 00:08:23.539160 990 log.go:181] (0xc00002a000) (0xc00073a000) Create stream\nI0825 00:08:23.539212 990 log.go:181] (0xc00002a000) (0xc00073a000) Stream added, broadcasting: 1\nI0825 00:08:23.540482 990 log.go:181] (0xc00002a000) Reply frame received for 1\nI0825 00:08:23.540514 990 log.go:181] (0xc00002a000) (0xc00073a0a0) Create stream\nI0825 00:08:23.540520 990 log.go:181] (0xc00002a000) (0xc00073a0a0) Stream added, broadcasting: 3\nI0825 00:08:23.541323 
990 log.go:181] (0xc00002a000) Reply frame received for 3\nI0825 00:08:23.541345 990 log.go:181] (0xc00002a000) (0xc00073a140) Create stream\nI0825 00:08:23.541351 990 log.go:181] (0xc00002a000) (0xc00073a140) Stream added, broadcasting: 5\nI0825 00:08:23.542172 990 log.go:181] (0xc00002a000) Reply frame received for 5\nI0825 00:08:23.627329 990 log.go:181] (0xc00002a000) Data frame received for 5\nI0825 00:08:23.627457 990 log.go:181] (0xc00073a140) (5) Data frame handling\nI0825 00:08:23.627531 990 log.go:181] (0xc00073a140) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nI0825 00:08:23.631172 990 log.go:181] (0xc00002a000) Data frame received for 5\nI0825 00:08:23.631188 990 log.go:181] (0xc00073a140) (5) Data frame handling\nI0825 00:08:23.631197 990 log.go:181] (0xc00073a140) (5) Data frame sent\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0825 00:08:23.633960 990 log.go:181] (0xc00002a000) Data frame received for 1\nI0825 00:08:23.633995 990 log.go:181] (0xc00073a000) (1) Data frame handling\nI0825 00:08:23.634009 990 log.go:181] (0xc00073a000) (1) Data frame sent\nI0825 00:08:23.634028 990 log.go:181] (0xc00002a000) (0xc00073a000) Stream removed, broadcasting: 1\nI0825 00:08:23.634061 990 log.go:181] (0xc00002a000) Data frame received for 3\nI0825 00:08:23.634086 990 log.go:181] (0xc00073a0a0) (3) Data frame handling\nI0825 00:08:23.634125 990 log.go:181] (0xc00002a000) Data frame received for 5\nI0825 00:08:23.634152 990 log.go:181] (0xc00073a140) (5) Data frame handling\nI0825 00:08:23.634176 990 log.go:181] (0xc00002a000) Go away received\nI0825 00:08:23.634308 990 log.go:181] (0xc00002a000) (0xc00073a000) Stream removed, broadcasting: 1\nI0825 00:08:23.634325 990 log.go:181] (0xc00002a000) (0xc00073a0a0) Stream removed, broadcasting: 3\nI0825 00:08:23.634332 990 log.go:181] (0xc00002a000) (0xc00073a140) Stream removed, broadcasting: 5\n" Aug 25 00:08:23.644: INFO: stdout: "" Aug 25 00:08:23.645: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-3076 execpod-affinity7d97v -- /bin/sh -x -c nc -zv -t -w 2 10.98.134.251 80' Aug 25 00:08:23.863: INFO: stderr: "I0825 00:08:23.772937 1008 log.go:181] (0xc00093d130) (0xc000e22a00) Create stream\nI0825 00:08:23.772989 1008 log.go:181] (0xc00093d130) (0xc000e22a00) Stream added, broadcasting: 1\nI0825 00:08:23.777700 1008 log.go:181] (0xc00093d130) Reply frame received for 1\nI0825 00:08:23.777737 1008 log.go:181] (0xc00093d130) (0xc00078bea0) Create stream\nI0825 00:08:23.777747 1008 log.go:181] (0xc00093d130) (0xc00078bea0) Stream added, broadcasting: 3\nI0825 00:08:23.778552 1008 log.go:181] (0xc00093d130) Reply frame received for 3\nI0825 00:08:23.778585 1008 log.go:181] (0xc00093d130) (0xc000bde0a0) Create stream\nI0825 00:08:23.778593 1008 log.go:181] (0xc00093d130) (0xc000bde0a0) Stream added, broadcasting: 5\nI0825 00:08:23.779393 1008 log.go:181] (0xc00093d130) Reply frame received for 5\nI0825 00:08:23.854525 1008 log.go:181] (0xc00093d130) Data frame received for 3\nI0825 00:08:23.854572 1008 log.go:181] (0xc00078bea0) (3) Data frame handling\nI0825 00:08:23.854607 1008 log.go:181] (0xc00093d130) Data frame received for 5\nI0825 00:08:23.854631 1008 log.go:181] (0xc000bde0a0) (5) Data frame handling\nI0825 00:08:23.854650 1008 log.go:181] (0xc000bde0a0) (5) Data frame sent\nI0825 00:08:23.854661 1008 log.go:181] (0xc00093d130) Data frame received for 5\nI0825 00:08:23.854668 1008 log.go:181] 
(0xc000bde0a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.98.134.251 80\nConnection to 10.98.134.251 80 port [tcp/http] succeeded!\nI0825 00:08:23.855757 1008 log.go:181] (0xc00093d130) Data frame received for 1\nI0825 00:08:23.855785 1008 log.go:181] (0xc000e22a00) (1) Data frame handling\nI0825 00:08:23.855797 1008 log.go:181] (0xc000e22a00) (1) Data frame sent\nI0825 00:08:23.855806 1008 log.go:181] (0xc00093d130) (0xc000e22a00) Stream removed, broadcasting: 1\nI0825 00:08:23.855910 1008 log.go:181] (0xc00093d130) Go away received\nI0825 00:08:23.856198 1008 log.go:181] (0xc00093d130) (0xc000e22a00) Stream removed, broadcasting: 1\nI0825 00:08:23.856213 1008 log.go:181] (0xc00093d130) (0xc00078bea0) Stream removed, broadcasting: 3\nI0825 00:08:23.856219 1008 log.go:181] (0xc00093d130) (0xc000bde0a0) Stream removed, broadcasting: 5\n" Aug 25 00:08:23.863: INFO: stdout: "" Aug 25 00:08:23.871: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-3076 execpod-affinity7d97v -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.98.134.251:80/ ; done' Aug 25 00:08:24.183: INFO: stderr: "I0825 00:08:24.000044 1026 log.go:181] (0xc000566d10) (0xc000b00500) Create stream\nI0825 00:08:24.000101 1026 log.go:181] (0xc000566d10) (0xc000b00500) Stream added, broadcasting: 1\nI0825 00:08:24.001900 1026 log.go:181] (0xc000566d10) Reply frame received for 1\nI0825 00:08:24.001922 1026 log.go:181] (0xc000566d10) (0xc0005b0780) Create stream\nI0825 00:08:24.001929 1026 log.go:181] (0xc000566d10) (0xc0005b0780) Stream added, broadcasting: 3\nI0825 00:08:24.002852 1026 log.go:181] (0xc000566d10) Reply frame received for 3\nI0825 00:08:24.002886 1026 log.go:181] (0xc000566d10) (0xc00055e5a0) Create stream\nI0825 00:08:24.002897 1026 log.go:181] (0xc000566d10) (0xc00055e5a0) Stream added, broadcasting: 5\nI0825 00:08:24.003587 1026 log.go:181] (0xc000566d10) Reply frame received for 5\nI0825 00:08:24.065708 1026 log.go:181] (0xc000566d10) Data frame received for 3\nI0825 00:08:24.065742 1026 log.go:181] (0xc0005b0780) (3) Data frame handling\nI0825 00:08:24.065752 1026 log.go:181] (0xc0005b0780) (3) Data frame sent\nI0825 00:08:24.065773 1026 log.go:181] (0xc000566d10) Data frame received for 5\nI0825 00:08:24.065781 1026 log.go:181] (0xc00055e5a0) (5) Data frame handling\nI0825 00:08:24.065788 1026 log.go:181] (0xc00055e5a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.134.251:80/\nI0825 00:08:24.069192 1026 log.go:181] (0xc000566d10) Data frame received for 3\nI0825 00:08:24.069211 1026 log.go:181] (0xc0005b0780) (3) Data frame handling\nI0825 00:08:24.069234 1026 log.go:181] (0xc0005b0780) (3) Data frame sent\nI0825 00:08:24.069798 1026 log.go:181] (0xc000566d10) Data frame received for 3\nI0825 00:08:24.069825 1026 log.go:181] (0xc0005b0780) (3) Data frame handling\nI0825 00:08:24.069833 1026 log.go:181] (0xc0005b0780) (3) Data frame sent\nI0825 00:08:24.069846 1026 log.go:181] (0xc000566d10) Data frame received for 5\nI0825 00:08:24.069853 1026 log.go:181] (0xc00055e5a0) (5) Data frame handling\nI0825 00:08:24.069861 1026 log.go:181] (0xc00055e5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.134.251:80/\nI0825 00:08:24.075992 1026 log.go:181] (0xc000566d10) Data frame received for 3\nI0825 00:08:24.076010 1026 log.go:181] (0xc0005b0780) (3) Data frame handling\nI0825 00:08:24.076021 1026 log.go:181] (0xc0005b0780) (3) 
Data frame sent\nI0825 00:08:24.076511 1026 log.go:181] (0xc000566d10) Data frame received for 3\nI0825 00:08:24.076523 1026 log.go:181] (0xc0005b0780) (3) Data frame handling\nI0825 00:08:24.076530 1026 log.go:181] (0xc0005b0780) (3) Data frame sent\nI0825 00:08:24.076540 1026 log.go:181] (0xc000566d10) Data frame received for 5\nI0825 00:08:24.076544 1026 log.go:181] (0xc00055e5a0) (5) Data frame handling\nI0825 00:08:24.076549 1026 log.go:181] (0xc00055e5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.134.251:80/\nI0825 00:08:24.084398 1026 log.go:181] (0xc000566d10) Data frame received for 3\nI0825 00:08:24.084421 1026 log.go:181] (0xc0005b0780) (3) Data frame handling\nI0825 00:08:24.084439 1026 log.go:181] (0xc0005b0780) (3) Data frame sent\nI0825 00:08:24.085052 1026 log.go:181] (0xc000566d10) Data frame received for 5\nI0825 00:08:24.085072 1026 log.go:181] (0xc00055e5a0) (5) Data frame handling\nI0825 00:08:24.085093 1026 log.go:181] (0xc00055e5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.134.251:80/\nI0825 00:08:24.085204 1026 log.go:181] (0xc000566d10) Data frame received for 3\nI0825 00:08:24.085221 1026 log.go:181] (0xc0005b0780) (3) Data frame handling\nI0825 00:08:24.085241 1026 log.go:181] (0xc0005b0780) (3) Data frame sent\nI0825 00:08:24.092319 1026 log.go:181] (0xc000566d10) Data frame received for 3\nI0825 00:08:24.092350 1026 log.go:181] (0xc0005b0780) (3) Data frame handling\nI0825 00:08:24.092368 1026 log.go:181] (0xc0005b0780) (3) Data frame sent\nI0825 00:08:24.093010 1026 log.go:181] (0xc000566d10) Data frame received for 5\nI0825 00:08:24.093034 1026 log.go:181] (0xc00055e5a0) (5) Data frame handling\nI0825 00:08:24.093044 1026 log.go:181] (0xc00055e5a0) (5) Data frame sent\nI0825 00:08:24.093052 1026 log.go:181] (0xc000566d10) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeout 2I0825 00:08:24.093058 1026 log.go:181] (0xc00055e5a0) (5) Data frame handling\nI0825 00:08:24.093085 1026 log.go:181] (0xc00055e5a0) (5) Data frame sent\n http://10.98.134.251:80/\nI0825 00:08:24.093102 1026 log.go:181] (0xc000566d10) Data frame received for 3\nI0825 00:08:24.093110 1026 log.go:181] (0xc0005b0780) (3) Data frame handling\nI0825 00:08:24.093119 1026 log.go:181] (0xc0005b0780) (3) Data frame sent\nI0825 00:08:24.098707 1026 log.go:181] (0xc000566d10) Data frame received for 3\nI0825 00:08:24.098728 1026 log.go:181] (0xc0005b0780) (3) Data frame handling\nI0825 00:08:24.098745 1026 log.go:181] (0xc0005b0780) (3) Data frame sent\nI0825 00:08:24.099613 1026 log.go:181] (0xc000566d10) Data frame received for 3\nI0825 00:08:24.099627 1026 log.go:181] (0xc0005b0780) (3) Data frame handling\nI0825 00:08:24.099642 1026 log.go:181] (0xc000566d10) Data frame received for 5\nI0825 00:08:24.099669 1026 log.go:181] (0xc00055e5a0) (5) Data frame handling\nI0825 00:08:24.099691 1026 log.go:181] (0xc00055e5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.134.251:80/\nI0825 00:08:24.099715 1026 log.go:181] (0xc0005b0780) (3) Data frame sent\nI0825 00:08:24.106081 1026 log.go:181] (0xc000566d10) Data frame received for 3\nI0825 00:08:24.106108 1026 log.go:181] (0xc0005b0780) (3) Data frame handling\nI0825 00:08:24.106127 1026 log.go:181] (0xc0005b0780) (3) Data frame sent\nI0825 00:08:24.106638 1026 log.go:181] (0xc000566d10) Data frame received for 3\nI0825 00:08:24.106656 1026 log.go:181] (0xc0005b0780) (3) Data frame handling\nI0825 00:08:24.106666 1026 log.go:181] (0xc0005b0780) 
(3) Data frame sent\nI0825 00:08:24.106681 1026 log.go:181] (0xc000566d10) Data frame received for 5\nI0825 00:08:24.106688 1026 log.go:181] (0xc00055e5a0) (5) Data frame handling\nI0825 00:08:24.106697 1026 log.go:181] (0xc00055e5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.134.251:80/\nI0825 00:08:24.111400 1026 log.go:181] (0xc000566d10) Data frame received for 3\nI0825 00:08:24.111419 1026 log.go:181] (0xc0005b0780) (3) Data frame handling\nI0825 00:08:24.111444 1026 log.go:181] (0xc0005b0780) (3) Data frame sent\nI0825 00:08:24.112174 1026 log.go:181] (0xc000566d10) Data frame received for 3\nI0825 00:08:24.112210 1026 log.go:181] (0xc0005b0780) (3) Data frame handling\nI0825 00:08:24.112224 1026 log.go:181] (0xc0005b0780) (3) Data frame sent\nI0825 00:08:24.112246 1026 log.go:181] (0xc000566d10) Data frame received for 5\nI0825 00:08:24.112263 1026 log.go:181] (0xc00055e5a0) (5) Data frame handling\nI0825 00:08:24.112294 1026 log.go:181] (0xc00055e5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.134.251:80/\nI0825 00:08:24.121144 1026 log.go:181] (0xc000566d10) Data frame received for 3\nI0825 00:08:24.121175 1026 log.go:181] (0xc0005b0780) (3) Data frame handling\nI0825 00:08:24.121203 1026 log.go:181] (0xc0005b0780) (3) Data frame sent\nI0825 00:08:24.121695 1026 log.go:181] (0xc000566d10) Data frame received for 3\nI0825 00:08:24.121716 1026 log.go:181] (0xc0005b0780) (3) Data frame handling\nI0825 00:08:24.121733 1026 log.go:181] (0xc0005b0780) (3) Data frame sent\nI0825 00:08:24.121757 1026 log.go:181] (0xc000566d10) Data frame received for 5\nI0825 00:08:24.121774 1026 log.go:181] (0xc00055e5a0) (5) Data frame handling\nI0825 00:08:24.121794 1026 log.go:181] (0xc00055e5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.134.251:80/\nI0825 00:08:24.128337 1026 log.go:181] (0xc000566d10) Data frame received for 3\nI0825 00:08:24.128360 1026 log.go:181] (0xc0005b0780) (3) Data frame handling\nI0825 00:08:24.128374 1026 log.go:181] (0xc0005b0780) (3) Data frame sent\nI0825 00:08:24.128988 1026 log.go:181] (0xc000566d10) Data frame received for 5\nI0825 00:08:24.129017 1026 log.go:181] (0xc00055e5a0) (5) Data frame handling\nI0825 00:08:24.129053 1026 log.go:181] (0xc00055e5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.134.251:80/\nI0825 00:08:24.129105 1026 log.go:181] (0xc000566d10) Data frame received for 3\nI0825 00:08:24.129134 1026 log.go:181] (0xc0005b0780) (3) Data frame handling\nI0825 00:08:24.129160 1026 log.go:181] (0xc0005b0780) (3) Data frame sent\nI0825 00:08:24.133951 1026 log.go:181] (0xc000566d10) Data frame received for 3\nI0825 00:08:24.133981 1026 log.go:181] (0xc0005b0780) (3) Data frame handling\nI0825 00:08:24.133995 1026 log.go:181] (0xc0005b0780) (3) Data frame sent\nI0825 00:08:24.134559 1026 log.go:181] (0xc000566d10) Data frame received for 3\nI0825 00:08:24.134583 1026 log.go:181] (0xc0005b0780) (3) Data frame handling\nI0825 00:08:24.134597 1026 log.go:181] (0xc0005b0780) (3) Data frame sent\nI0825 00:08:24.134618 1026 log.go:181] (0xc000566d10) Data frame received for 5\nI0825 00:08:24.134629 1026 log.go:181] (0xc00055e5a0) (5) Data frame handling\nI0825 00:08:24.134640 1026 log.go:181] (0xc00055e5a0) (5) Data frame sent\nI0825 00:08:24.134658 1026 log.go:181] (0xc000566d10) Data frame received for 5\nI0825 00:08:24.134668 1026 log.go:181] (0xc00055e5a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.98.134.251:80/\nI0825 00:08:24.134697 1026 log.go:181] (0xc00055e5a0) (5) Data frame sent\nI0825 00:08:24.141689 1026 log.go:181] (0xc000566d10) Data frame received for 3\nI0825 00:08:24.141707 1026 log.go:181] (0xc0005b0780) (3) Data frame handling\nI0825 00:08:24.141714 1026 log.go:181] (0xc0005b0780) (3) Data frame sent\nI0825 00:08:24.142557 1026 log.go:181] (0xc000566d10) Data frame received for 3\nI0825 00:08:24.142575 1026 log.go:181] (0xc0005b0780) (3) Data frame handling\nI0825 00:08:24.142582 1026 log.go:181] (0xc0005b0780) (3) Data frame sent\nI0825 00:08:24.142622 1026 log.go:181] (0xc000566d10) Data frame received for 5\nI0825 00:08:24.142685 1026 log.go:181] (0xc00055e5a0) (5) Data frame handling\nI0825 00:08:24.142720 1026 log.go:181] (0xc00055e5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.134.251:80/\nI0825 00:08:24.148865 1026 log.go:181] (0xc000566d10) Data frame received for 3\nI0825 00:08:24.148879 1026 log.go:181] (0xc0005b0780) (3) Data frame handling\nI0825 00:08:24.148887 1026 log.go:181] (0xc0005b0780) (3) Data frame sent\nI0825 00:08:24.149596 1026 log.go:181] (0xc000566d10) Data frame received for 3\nI0825 00:08:24.149627 1026 log.go:181] (0xc000566d10) Data frame received for 5\nI0825 00:08:24.149654 1026 log.go:181] (0xc00055e5a0) (5) Data frame handling\nI0825 00:08:24.149665 1026 log.go:181] (0xc00055e5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.134.251:80/\nI0825 00:08:24.149680 1026 log.go:181] (0xc0005b0780) (3) Data frame handling\nI0825 00:08:24.149690 1026 log.go:181] (0xc0005b0780) (3) Data frame sent\nI0825 00:08:24.152648 1026 log.go:181] (0xc000566d10) Data frame received for 3\nI0825 00:08:24.152663 1026 log.go:181] (0xc0005b0780) (3) Data frame handling\nI0825 00:08:24.152668 1026 log.go:181] (0xc0005b0780) (3) Data frame sent\nI0825 00:08:24.153593 1026 log.go:181] (0xc000566d10) Data frame received for 3\nI0825 00:08:24.153615 1026 log.go:181] (0xc0005b0780) (3) Data frame handling\nI0825 00:08:24.153638 1026 log.go:181] (0xc0005b0780) (3) Data frame sent\nI0825 00:08:24.153651 1026 log.go:181] (0xc000566d10) Data frame received for 5\nI0825 00:08:24.153661 1026 log.go:181] (0xc00055e5a0) (5) Data frame handling\nI0825 00:08:24.153679 1026 log.go:181] (0xc00055e5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.134.251:80/\nI0825 00:08:24.158701 1026 log.go:181] (0xc000566d10) Data frame received for 5\nI0825 00:08:24.158743 1026 log.go:181] (0xc00055e5a0) (5) Data frame handling\nI0825 00:08:24.158764 1026 log.go:181] (0xc00055e5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.134.251:80/\nI0825 00:08:24.158795 1026 log.go:181] (0xc000566d10) Data frame received for 3\nI0825 00:08:24.158819 1026 log.go:181] (0xc0005b0780) (3) Data frame handling\nI0825 00:08:24.158843 1026 log.go:181] (0xc0005b0780) (3) Data frame sent\nI0825 00:08:24.165016 1026 log.go:181] (0xc000566d10) Data frame received for 3\nI0825 00:08:24.165039 1026 log.go:181] (0xc0005b0780) (3) Data frame handling\nI0825 00:08:24.165050 1026 log.go:181] (0xc0005b0780) (3) Data frame sent\nI0825 00:08:24.165521 1026 log.go:181] (0xc000566d10) Data frame received for 3\nI0825 00:08:24.165553 1026 log.go:181] (0xc0005b0780) (3) Data frame handling\nI0825 00:08:24.165565 1026 log.go:181] (0xc0005b0780) (3) Data frame sent\nI0825 00:08:24.165587 1026 log.go:181] (0xc000566d10) Data frame received for 5\nI0825 00:08:24.165595 1026 log.go:181] 
(0xc00055e5a0) (5) Data frame handling\nI0825 00:08:24.165605 1026 log.go:181] (0xc00055e5a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.134.251:80/\nI0825 00:08:24.170163 1026 log.go:181] (0xc000566d10) Data frame received for 3\nI0825 00:08:24.170177 1026 log.go:181] (0xc0005b0780) (3) Data frame handling\nI0825 00:08:24.170184 1026 log.go:181] (0xc0005b0780) (3) Data frame sent\nI0825 00:08:24.171191 1026 log.go:181] (0xc000566d10) Data frame received for 5\nI0825 00:08:24.171217 1026 log.go:181] (0xc00055e5a0) (5) Data frame handling\nI0825 00:08:24.171425 1026 log.go:181] (0xc000566d10) Data frame received for 3\nI0825 00:08:24.171449 1026 log.go:181] (0xc0005b0780) (3) Data frame handling\nI0825 00:08:24.173342 1026 log.go:181] (0xc000566d10) Data frame received for 1\nI0825 00:08:24.173363 1026 log.go:181] (0xc000b00500) (1) Data frame handling\nI0825 00:08:24.173380 1026 log.go:181] (0xc000b00500) (1) Data frame sent\nI0825 00:08:24.173396 1026 log.go:181] (0xc000566d10) (0xc000b00500) Stream removed, broadcasting: 1\nI0825 00:08:24.173409 1026 log.go:181] (0xc000566d10) Go away received\nI0825 00:08:24.173822 1026 log.go:181] (0xc000566d10) (0xc000b00500) Stream removed, broadcasting: 1\nI0825 00:08:24.173847 1026 log.go:181] (0xc000566d10) (0xc0005b0780) Stream removed, broadcasting: 3\nI0825 00:08:24.173865 1026 log.go:181] (0xc000566d10) (0xc00055e5a0) Stream removed, broadcasting: 5\n" Aug 25 00:08:24.184: INFO: stdout: "\naffinity-clusterip-transition-mh9mg\naffinity-clusterip-transition-8zlch\naffinity-clusterip-transition-lzfn5\naffinity-clusterip-transition-lzfn5\naffinity-clusterip-transition-mh9mg\naffinity-clusterip-transition-lzfn5\naffinity-clusterip-transition-8zlch\naffinity-clusterip-transition-mh9mg\naffinity-clusterip-transition-8zlch\naffinity-clusterip-transition-mh9mg\naffinity-clusterip-transition-8zlch\naffinity-clusterip-transition-mh9mg\naffinity-clusterip-transition-mh9mg\naffinity-clusterip-transition-lzfn5\naffinity-clusterip-transition-8zlch\naffinity-clusterip-transition-lzfn5" Aug 25 00:08:24.184: INFO: Received response from host: affinity-clusterip-transition-mh9mg Aug 25 00:08:24.184: INFO: Received response from host: affinity-clusterip-transition-8zlch Aug 25 00:08:24.184: INFO: Received response from host: affinity-clusterip-transition-lzfn5 Aug 25 00:08:24.184: INFO: Received response from host: affinity-clusterip-transition-lzfn5 Aug 25 00:08:24.184: INFO: Received response from host: affinity-clusterip-transition-mh9mg Aug 25 00:08:24.184: INFO: Received response from host: affinity-clusterip-transition-lzfn5 Aug 25 00:08:24.184: INFO: Received response from host: affinity-clusterip-transition-8zlch Aug 25 00:08:24.184: INFO: Received response from host: affinity-clusterip-transition-mh9mg Aug 25 00:08:24.184: INFO: Received response from host: affinity-clusterip-transition-8zlch Aug 25 00:08:24.184: INFO: Received response from host: affinity-clusterip-transition-mh9mg Aug 25 00:08:24.184: INFO: Received response from host: affinity-clusterip-transition-8zlch Aug 25 00:08:24.184: INFO: Received response from host: affinity-clusterip-transition-mh9mg Aug 25 00:08:24.184: INFO: Received response from host: affinity-clusterip-transition-mh9mg Aug 25 00:08:24.184: INFO: Received response from host: affinity-clusterip-transition-lzfn5 Aug 25 00:08:24.184: INFO: Received response from host: affinity-clusterip-transition-8zlch Aug 25 00:08:24.184: INFO: Received response from host: 
affinity-clusterip-transition-lzfn5 Aug 25 00:08:24.191: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-3076 execpod-affinity7d97v -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.98.134.251:80/ ; done' Aug 25 00:08:24.524: INFO: stderr: "I0825 00:08:24.323033 1042 log.go:181] (0xc000bb71e0) (0xc000bb08c0) Create stream\nI0825 00:08:24.323099 1042 log.go:181] (0xc000bb71e0) (0xc000bb08c0) Stream added, broadcasting: 1\nI0825 00:08:24.329880 1042 log.go:181] (0xc000bb71e0) Reply frame received for 1\nI0825 00:08:24.329911 1042 log.go:181] (0xc000bb71e0) (0xc000bb0960) Create stream\nI0825 00:08:24.329918 1042 log.go:181] (0xc000bb71e0) (0xc000bb0960) Stream added, broadcasting: 3\nI0825 00:08:24.331969 1042 log.go:181] (0xc000bb71e0) Reply frame received for 3\nI0825 00:08:24.332000 1042 log.go:181] (0xc000bb71e0) (0xc00062c500) Create stream\nI0825 00:08:24.332012 1042 log.go:181] (0xc000bb71e0) (0xc00062c500) Stream added, broadcasting: 5\nI0825 00:08:24.332717 1042 log.go:181] (0xc000bb71e0) Reply frame received for 5\nI0825 00:08:24.417753 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.417781 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.417801 1042 log.go:181] (0xc000bb0960) (3) Data frame sent\nI0825 00:08:24.417855 1042 log.go:181] (0xc000bb71e0) Data frame received for 5\nI0825 00:08:24.417875 1042 log.go:181] (0xc00062c500) (5) Data frame handling\nI0825 00:08:24.417889 1042 log.go:181] (0xc00062c500) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.134.251:80/\nI0825 00:08:24.423411 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.423437 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.423456 1042 log.go:181] (0xc000bb0960) (3) Data frame sent\nI0825 00:08:24.424097 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.424133 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.424143 1042 log.go:181] (0xc000bb0960) (3) Data frame sent\nI0825 00:08:24.424160 1042 log.go:181] (0xc000bb71e0) Data frame received for 5\nI0825 00:08:24.424168 1042 log.go:181] (0xc00062c500) (5) Data frame handling\nI0825 00:08:24.424175 1042 log.go:181] (0xc00062c500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.134.251:80/\nI0825 00:08:24.428264 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.428286 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.428314 1042 log.go:181] (0xc000bb0960) (3) Data frame sent\nI0825 00:08:24.428720 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.428814 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.428825 1042 log.go:181] (0xc000bb0960) (3) Data frame sent\nI0825 00:08:24.428875 1042 log.go:181] (0xc000bb71e0) Data frame received for 5\nI0825 00:08:24.428902 1042 log.go:181] (0xc00062c500) (5) Data frame handling\nI0825 00:08:24.428919 1042 log.go:181] (0xc00062c500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.134.251:80/\nI0825 00:08:24.433449 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.433461 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.433467 1042 log.go:181] (0xc000bb0960) (3) Data frame sent\nI0825 00:08:24.434235 1042 log.go:181] (0xc000bb71e0) Data frame 
received for 5\nI0825 00:08:24.434252 1042 log.go:181] (0xc00062c500) (5) Data frame handling\nI0825 00:08:24.434268 1042 log.go:181] (0xc00062c500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.134.251:80/\nI0825 00:08:24.434352 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.434372 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.434390 1042 log.go:181] (0xc000bb0960) (3) Data frame sent\nI0825 00:08:24.437932 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.437962 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.437992 1042 log.go:181] (0xc000bb0960) (3) Data frame sent\nI0825 00:08:24.438839 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.438852 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.438864 1042 log.go:181] (0xc000bb0960) (3) Data frame sent\nI0825 00:08:24.438883 1042 log.go:181] (0xc000bb71e0) Data frame received for 5\nI0825 00:08:24.438906 1042 log.go:181] (0xc00062c500) (5) Data frame handling\nI0825 00:08:24.438920 1042 log.go:181] (0xc00062c500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.134.251:80/\nI0825 00:08:24.443718 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.443733 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.443746 1042 log.go:181] (0xc000bb0960) (3) Data frame sent\nI0825 00:08:24.444699 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.444818 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.444840 1042 log.go:181] (0xc000bb0960) (3) Data frame sent\nI0825 00:08:24.444868 1042 log.go:181] (0xc000bb71e0) Data frame received for 5\nI0825 00:08:24.444882 1042 log.go:181] (0xc00062c500) (5) Data frame handling\nI0825 00:08:24.444900 1042 log.go:181] (0xc00062c500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.134.251:80/\nI0825 00:08:24.451974 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.452002 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.452022 1042 log.go:181] (0xc000bb0960) (3) Data frame sent\nI0825 00:08:24.452573 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.452602 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.452619 1042 log.go:181] (0xc000bb0960) (3) Data frame sent\nI0825 00:08:24.452640 1042 log.go:181] (0xc000bb71e0) Data frame received for 5\nI0825 00:08:24.452650 1042 log.go:181] (0xc00062c500) (5) Data frame handling\nI0825 00:08:24.452662 1042 log.go:181] (0xc00062c500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.134.251:80/\nI0825 00:08:24.456920 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.456949 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.456966 1042 log.go:181] (0xc000bb0960) (3) Data frame sent\nI0825 00:08:24.457833 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.457849 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.457856 1042 log.go:181] (0xc000bb0960) (3) Data frame sent\nI0825 00:08:24.457867 1042 log.go:181] (0xc000bb71e0) Data frame received for 5\nI0825 00:08:24.457872 1042 log.go:181] (0xc00062c500) (5) Data frame handling\nI0825 00:08:24.457879 1042 log.go:181] (0xc00062c500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.98.134.251:80/\nI0825 00:08:24.462621 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.462639 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.462660 1042 log.go:181] (0xc000bb0960) (3) Data frame sent\nI0825 00:08:24.463027 1042 log.go:181] (0xc000bb71e0) Data frame received for 5\nI0825 00:08:24.463047 1042 log.go:181] (0xc00062c500) (5) Data frame handling\nI0825 00:08:24.463055 1042 log.go:181] (0xc00062c500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.134.251:80/\nI0825 00:08:24.463064 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.463069 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.463074 1042 log.go:181] (0xc000bb0960) (3) Data frame sent\nI0825 00:08:24.466601 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.466619 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.466633 1042 log.go:181] (0xc000bb0960) (3) Data frame sent\nI0825 00:08:24.467180 1042 log.go:181] (0xc000bb71e0) Data frame received for 5\nI0825 00:08:24.467199 1042 log.go:181] (0xc00062c500) (5) Data frame handling\nI0825 00:08:24.467207 1042 log.go:181] (0xc00062c500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.134.251:80/\nI0825 00:08:24.467230 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.467252 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.467264 1042 log.go:181] (0xc000bb0960) (3) Data frame sent\nI0825 00:08:24.472262 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.472284 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.472301 1042 log.go:181] (0xc000bb0960) (3) Data frame sent\nI0825 00:08:24.472651 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.472664 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.472672 1042 log.go:181] (0xc000bb0960) (3) Data frame sent\nI0825 00:08:24.472684 1042 log.go:181] (0xc000bb71e0) Data frame received for 5\nI0825 00:08:24.472694 1042 log.go:181] (0xc00062c500) (5) Data frame handling\nI0825 00:08:24.472700 1042 log.go:181] (0xc00062c500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.134.251:80/\nI0825 00:08:24.477781 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.477794 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.477801 1042 log.go:181] (0xc000bb0960) (3) Data frame sent\nI0825 00:08:24.478614 1042 log.go:181] (0xc000bb71e0) Data frame received for 5\nI0825 00:08:24.478650 1042 log.go:181] (0xc00062c500) (5) Data frame handling\nI0825 00:08:24.478662 1042 log.go:181] (0xc00062c500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.134.251:80/\nI0825 00:08:24.478678 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.478685 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.478693 1042 log.go:181] (0xc000bb0960) (3) Data frame sent\nI0825 00:08:24.482843 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.482872 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.482895 1042 log.go:181] (0xc000bb0960) (3) Data frame sent\nI0825 00:08:24.483360 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.483390 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.483409 1042 log.go:181] 
(0xc000bb71e0) Data frame received for 5\nI0825 00:08:24.483430 1042 log.go:181] (0xc00062c500) (5) Data frame handling\nI0825 00:08:24.483442 1042 log.go:181] (0xc00062c500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.134.251:80/\nI0825 00:08:24.483463 1042 log.go:181] (0xc000bb0960) (3) Data frame sent\nI0825 00:08:24.490913 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.490939 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.490959 1042 log.go:181] (0xc000bb0960) (3) Data frame sent\nI0825 00:08:24.491661 1042 log.go:181] (0xc000bb71e0) Data frame received for 5\nI0825 00:08:24.491689 1042 log.go:181] (0xc00062c500) (5) Data frame handling\nI0825 00:08:24.491705 1042 log.go:181] (0xc00062c500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.134.251:80/\nI0825 00:08:24.491832 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.491853 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.491872 1042 log.go:181] (0xc000bb0960) (3) Data frame sent\nI0825 00:08:24.497730 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.497756 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.497777 1042 log.go:181] (0xc000bb0960) (3) Data frame sent\nI0825 00:08:24.498743 1042 log.go:181] (0xc000bb71e0) Data frame received for 5\nI0825 00:08:24.498778 1042 log.go:181] (0xc00062c500) (5) Data frame handling\nI0825 00:08:24.498798 1042 log.go:181] (0xc00062c500) (5) Data frame sent\nI0825 00:08:24.498815 1042 log.go:181] (0xc000bb71e0) Data frame received for 5\nI0825 00:08:24.498830 1042 log.go:181] (0xc00062c500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2I0825 00:08:24.498850 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.498868 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.498981 1042 log.go:181] (0xc000bb0960) (3) Data frame sent\n http://10.98.134.251:80/\nI0825 00:08:24.499026 1042 log.go:181] (0xc00062c500) (5) Data frame sent\nI0825 00:08:24.506017 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.506056 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.506075 1042 log.go:181] (0xc000bb0960) (3) Data frame sent\nI0825 00:08:24.506688 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.506784 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.506813 1042 log.go:181] (0xc000bb0960) (3) Data frame sent\nI0825 00:08:24.506841 1042 log.go:181] (0xc000bb71e0) Data frame received for 5\nI0825 00:08:24.506850 1042 log.go:181] (0xc00062c500) (5) Data frame handling\nI0825 00:08:24.506865 1042 log.go:181] (0xc00062c500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.134.251:80/\nI0825 00:08:24.512187 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.512203 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.512218 1042 log.go:181] (0xc000bb0960) (3) Data frame sent\nI0825 00:08:24.512660 1042 log.go:181] (0xc000bb71e0) Data frame received for 5\nI0825 00:08:24.512684 1042 log.go:181] (0xc00062c500) (5) Data frame handling\nI0825 00:08:24.512705 1042 log.go:181] (0xc000bb71e0) Data frame received for 3\nI0825 00:08:24.512715 1042 log.go:181] (0xc000bb0960) (3) Data frame handling\nI0825 00:08:24.514632 1042 log.go:181] (0xc000bb71e0) Data frame received for 1\nI0825 00:08:24.514654 1042 
log.go:181] (0xc000bb08c0) (1) Data frame handling\nI0825 00:08:24.514663 1042 log.go:181] (0xc000bb08c0) (1) Data frame sent\nI0825 00:08:24.514679 1042 log.go:181] (0xc000bb71e0) (0xc000bb08c0) Stream removed, broadcasting: 1\nI0825 00:08:24.514768 1042 log.go:181] (0xc000bb71e0) Go away received\nI0825 00:08:24.515003 1042 log.go:181] (0xc000bb71e0) (0xc000bb08c0) Stream removed, broadcasting: 1\nI0825 00:08:24.515017 1042 log.go:181] (0xc000bb71e0) (0xc000bb0960) Stream removed, broadcasting: 3\nI0825 00:08:24.515024 1042 log.go:181] (0xc000bb71e0) (0xc00062c500) Stream removed, broadcasting: 5\n" Aug 25 00:08:24.525: INFO: stdout: "\naffinity-clusterip-transition-mh9mg\naffinity-clusterip-transition-mh9mg\naffinity-clusterip-transition-mh9mg\naffinity-clusterip-transition-mh9mg\naffinity-clusterip-transition-mh9mg\naffinity-clusterip-transition-mh9mg\naffinity-clusterip-transition-mh9mg\naffinity-clusterip-transition-mh9mg\naffinity-clusterip-transition-mh9mg\naffinity-clusterip-transition-mh9mg\naffinity-clusterip-transition-mh9mg\naffinity-clusterip-transition-mh9mg\naffinity-clusterip-transition-mh9mg\naffinity-clusterip-transition-mh9mg\naffinity-clusterip-transition-mh9mg\naffinity-clusterip-transition-mh9mg" Aug 25 00:08:24.525: INFO: Received response from host: affinity-clusterip-transition-mh9mg Aug 25 00:08:24.525: INFO: Received response from host: affinity-clusterip-transition-mh9mg Aug 25 00:08:24.525: INFO: Received response from host: affinity-clusterip-transition-mh9mg Aug 25 00:08:24.525: INFO: Received response from host: affinity-clusterip-transition-mh9mg Aug 25 00:08:24.525: INFO: Received response from host: affinity-clusterip-transition-mh9mg Aug 25 00:08:24.525: INFO: Received response from host: affinity-clusterip-transition-mh9mg Aug 25 00:08:24.525: INFO: Received response from host: affinity-clusterip-transition-mh9mg Aug 25 00:08:24.525: INFO: Received response from host: affinity-clusterip-transition-mh9mg Aug 25 00:08:24.525: INFO: Received response from host: affinity-clusterip-transition-mh9mg Aug 25 00:08:24.525: INFO: Received response from host: affinity-clusterip-transition-mh9mg Aug 25 00:08:24.525: INFO: Received response from host: affinity-clusterip-transition-mh9mg Aug 25 00:08:24.525: INFO: Received response from host: affinity-clusterip-transition-mh9mg Aug 25 00:08:24.525: INFO: Received response from host: affinity-clusterip-transition-mh9mg Aug 25 00:08:24.525: INFO: Received response from host: affinity-clusterip-transition-mh9mg Aug 25 00:08:24.525: INFO: Received response from host: affinity-clusterip-transition-mh9mg Aug 25 00:08:24.525: INFO: Received response from host: affinity-clusterip-transition-mh9mg Aug 25 00:08:24.525: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-3076, will wait for the garbage collector to delete the pods Aug 25 00:08:24.694: INFO: Deleting ReplicationController affinity-clusterip-transition took: 6.504305ms Aug 25 00:08:25.294: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 600.195564ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:08:41.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3076" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:33.069 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":140,"skipped":2395,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:08:41.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl logs /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1415 STEP: creating a pod Aug 25 00:08:41.836: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.20 --namespace=kubectl-555 -- logs-generator --log-lines-total 100 --run-duration 20s' Aug 25 00:08:42.064: INFO: stderr: "" Aug 25 00:08:42.064: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. Aug 25 00:08:42.064: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Aug 25 00:08:42.064: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-555" to be "running and ready, or succeeded" Aug 25 00:08:42.087: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 23.150936ms Aug 25 00:08:44.213: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149135168s Aug 25 00:08:46.297: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.232880017s Aug 25 00:08:48.301: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 6.237067209s Aug 25 00:08:48.301: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Aug 25 00:08:48.301: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings Aug 25 00:08:48.301: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-555' Aug 25 00:08:48.440: INFO: stderr: "" Aug 25 00:08:48.441: INFO: stdout: "I0825 00:08:45.916184 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/hshn 567\nI0825 00:08:46.116300 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/79f 242\nI0825 00:08:46.316325 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/qzw 583\nI0825 00:08:46.516306 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/fc7d 218\nI0825 00:08:46.716312 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/8f52 422\nI0825 00:08:46.916359 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/7ssf 517\nI0825 00:08:47.116309 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/zzh 335\nI0825 00:08:47.316317 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/57bz 461\nI0825 00:08:47.516297 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/pdcz 428\nI0825 00:08:47.716314 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/l46q 384\nI0825 00:08:47.916319 1 logs_generator.go:76] 10 POST /api/v1/namespaces/default/pods/mdnr 481\nI0825 00:08:48.116308 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/tzp 511\nI0825 00:08:48.316329 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/zlx 205\n" STEP: limiting log lines Aug 25 00:08:48.441: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-555 --tail=1' Aug 25 00:08:48.539: INFO: stderr: "" Aug 25 00:08:48.539: INFO: stdout: "I0825 00:08:48.516299 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/9j27 400\n" Aug 25 00:08:48.539: INFO: got output "I0825 00:08:48.516299 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/9j27 400\n" STEP: limiting log bytes Aug 25 00:08:48.539: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-555 --limit-bytes=1' Aug 25 00:08:48.651: INFO: stderr: "" Aug 25 00:08:48.651: INFO: stdout: "I" Aug 25 00:08:48.651: INFO: got output "I" STEP: exposing timestamps Aug 25 00:08:48.651: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-555 --tail=1 --timestamps' Aug 25 00:08:48.985: INFO: stderr: "" Aug 25 00:08:48.985: INFO: stdout: "2020-08-25T00:08:48.516444743Z I0825 00:08:48.516299 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/9j27 400\n2020-08-25T00:08:48.716434460Z I0825 00:08:48.716291 1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/mnl5 201\n2020-08-25T00:08:48.974033984Z I0825 00:08:48.916359 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/2mb 330\n" Aug 25 00:08:48.985: INFO: got output "2020-08-25T00:08:48.516444743Z I0825 00:08:48.516299 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/9j27 400\n2020-08-25T00:08:48.716434460Z I0825 00:08:48.716291 1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/mnl5 201\nI0825 00:08:48.974033984Z I0825 00:08:48.916359 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/2mb 330\n" Aug 25 
00:08:48.986: FAIL: Expected : 3 to equal : 1 Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.glob..func1.23.3() /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1468 +0xa05 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001949500) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x345 k8s.io/kubernetes/test/e2e.TestE2E(0xc001949500) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:145 +0x2b testing.tRunner(0xc001949500, 0x4dcc9f0) /usr/local/go/src/testing/testing.go:1108 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1159 +0x386 [AfterEach] Kubectl logs /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1421 Aug 25 00:08:48.986: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-555' Aug 25 00:08:59.710: INFO: stderr: "" Aug 25 00:08:59.710: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 STEP: Collecting events from namespace "kubectl-555". STEP: Found 5 events. Aug 25 00:08:59.794: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for logs-generator: { } Scheduled: Successfully assigned kubectl-555/logs-generator to latest-worker2 Aug 25 00:08:59.794: INFO: At 2020-08-25 00:08:43 +0000 UTC - event for logs-generator: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.20" already present on machine Aug 25 00:08:59.794: INFO: At 2020-08-25 00:08:45 +0000 UTC - event for logs-generator: {kubelet latest-worker2} Created: Created container logs-generator Aug 25 00:08:59.794: INFO: At 2020-08-25 00:08:46 +0000 UTC - event for logs-generator: {kubelet latest-worker2} Started: Started container logs-generator Aug 25 00:08:59.794: INFO: At 2020-08-25 00:08:49 +0000 UTC - event for logs-generator: {kubelet latest-worker2} Killing: Stopping container logs-generator Aug 25 00:08:59.797: INFO: POD NODE PHASE GRACE CONDITIONS Aug 25 00:08:59.797: INFO: Aug 25 00:08:59.801: INFO: Logging node info for node latest-control-plane Aug 25 00:08:59.804: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane /api/v1/nodes/latest-control-plane e5265ef7-4fee-44e7-b227-c9d0aff11127 3425282 0 2020-08-15 09:42:01 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2020-08-15 09:42:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2020-08-15 09:42:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}},"f:labels":{"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2020-08-25 00:07:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-08-25 00:07:16 +0000 UTC,LastTransitionTime:2020-08-15 09:41:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-08-25 00:07:16 +0000 UTC,LastTransitionTime:2020-08-15 09:41:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-08-25 00:07:16 +0000 UTC,LastTransitionTime:2020-08-15 09:41:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-08-25 00:07:16 +0000 UTC,LastTransitionTime:2020-08-15 09:42:31 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.12,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:355da13825784523b4a253c23edd1334,SystemUUID:8f367e0f-042b-45ff-9966-5ca6bcc1cc56,BootID:11738d2d-5baa-4089-8e7f-2fb0329fce58,KernelVersion:4.15.0-109-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.4.0-beta.1-85-g334f567e,KubeletVersion:v1.19.0-rc.1,KubeProxyVersion:v1.19.0-rc.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.7-0],SizeBytes:299470271,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.19.0-rc.1],SizeBytes:137937533,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.19.0-rc.1],SizeBytes:101224746,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.19.0-rc.1],SizeBytes:87920444,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.19.0-rc.1],SizeBytes:67843882,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Aug 25 00:08:59.804: INFO: Logging kubelet events for node latest-control-plane Aug 25 00:08:59.807: INFO: Logging pods the kubelet thinks are on node latest-control-plane Aug 25 00:08:59.830: INFO: etcd-latest-control-plane started at 2020-08-15 09:42:12 +0000 UTC (0+1 container statuses recorded) Aug 25 00:08:59.830: INFO: Container etcd ready: true, restart count 0 Aug 25 00:08:59.830: INFO: kube-controller-manager-latest-control-plane started at 2020-08-15 09:42:12 +0000 UTC (0+1 container statuses recorded) Aug 25 00:08:59.830: INFO: Container kube-controller-manager ready: true, restart count 13 Aug 25 00:08:59.830: INFO: kube-proxy-8zfjc started at 2020-08-15 09:42:20 +0000 UTC (0+1 container statuses recorded) Aug 25 00:08:59.830: INFO: Container kube-proxy ready: true, restart count 0 Aug 25 00:08:59.830: INFO: local-path-provisioner-8b46957d4-csnr8 started at 2020-08-15 09:42:41 +0000 UTC (0+1 container statuses recorded) Aug 25 00:08:59.830: INFO: Container local-path-provisioner ready: true, restart count 0 Aug 25 00:08:59.830: INFO: kube-apiserver-latest-control-plane started at 2020-08-15 09:42:12 +0000 UTC (0+1 container statuses recorded) Aug 25 00:08:59.830: INFO: Container kube-apiserver ready: true, restart count 0 Aug 25 00:08:59.830: INFO: kube-scheduler-latest-control-plane started at 2020-08-15 09:42:12 +0000 UTC (0+1 container statuses recorded) Aug 25 00:08:59.830: INFO: Container kube-scheduler ready: true, restart count 6 Aug 25 00:08:59.830: INFO: kindnet-qmj2d started at 2020-08-15 09:42:20 +0000 UTC (0+1 container statuses recorded) Aug 25 00:08:59.830: INFO: Container kindnet-cni ready: true, restart count 1 Aug 25 00:08:59.830: INFO: coredns-f9fd979d6-f7hdg started at 2020-08-15 09:42:39 +0000 UTC (0+1 container statuses recorded) Aug 25 00:08:59.830: INFO: Container coredns ready: true, restart count 0 Aug 25 00:08:59.830: INFO: coredns-f9fd979d6-vxzgb started at 2020-08-15 09:42:40 +0000 UTC (0+1 container statuses recorded) Aug 25 00:08:59.831: INFO: Container coredns ready: true, restart count 0 
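The FAIL recorded above ("Expected : 3 to equal : 1" at kubectl.go:1468) comes from the timestamps step: the command asked for --tail=1 --timestamps, yet the captured stdout carried three timestamped entries (13, 14, 15). A hedged sketch of that style of line-count assertion (names and structure are illustrative, not the framework's exact code):

```go
package main

import (
	"fmt"
	"strings"
)

// countLogLines counts the non-empty lines in captured kubectl output;
// with --tail=1 the test expects the count to be exactly 1.
func countLogLines(out string) int {
	n := 0
	for _, line := range strings.Split(out, "\n") {
		if strings.TrimSpace(line) != "" {
			n++
		}
	}
	return n
}

func main() {
	// The timestamped output captured above had three entries (13, 14, 15)
	// even though the command asked for --tail=1.
	got := "2020-08-25T00:08:48.516444743Z ... 13 PUT ...\n" +
		"2020-08-25T00:08:48.716434460Z ... 14 GET ...\n" +
		"2020-08-25T00:08:48.974033984Z ... 15 POST ...\n"
	fmt.Printf("Expected %d to equal 1\n", countLogLines(got)) // Expected 3 to equal 1
}
```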
W0825 00:08:59.837247 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 25 00:08:59.925: INFO: Latency metrics for node latest-control-plane Aug 25 00:08:59.925: INFO: Logging node info for node latest-worker Aug 25 00:08:59.945: INFO: Node Info: &Node{ObjectMeta:{latest-worker /api/v1/nodes/latest-worker 004fc98a-1b9f-43ac-98e7-5d7f7d4d062a 3425452 0 2020-08-15 09:42:30 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2020-08-15 09:42:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}},"f:labels":{"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubeadm Update v1 2020-08-15 09:42:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {e2e.test Update v1 2020-08-24 23:49:57 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}} {kubelet Update v1 2020-08-25 00:07:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki 
BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-08-25 00:07:59 +0000 UTC,LastTransitionTime:2020-08-15 09:42:30 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-08-25 00:07:59 +0000 UTC,LastTransitionTime:2020-08-15 09:42:30 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-08-25 00:07:59 +0000 UTC,LastTransitionTime:2020-08-15 09:42:30 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-08-25 00:07:59 +0000 UTC,LastTransitionTime:2020-08-15 09:43:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.11,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4962fc9ace3b4cf98891488fcb5c4ee6,SystemUUID:b6eda539-1b1b-4e57-b392-83081398c987,BootID:11738d2d-5baa-4089-8e7f-2fb0329fce58,KernelVersion:4.15.0-109-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.4.0-beta.1-85-g334f567e,KubeletVersion:v1.19.0-rc.1,KubeProxyVersion:v1.19.0-rc.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:232be9c5a4400e4c5e0932fde50af8f379e3e9ddd4d3f28da6ec78c86f6ed9f6 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386367560,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:0b4d47a5161ecb6b44f6a479a27522b802096a2deea049cd6f3c01a62b585318 docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360604157,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:28557b896e190c72f02121314ca7c9abaca30f91a733b566b2c44b761e5a252c docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351361235,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:257ef9011d4ff30771c0c48ef7e3b16926dce88c17d4435953f433fa9e0d731a docker.io/ollivier/clearwater-homer:latest],SizeBytes:344184630,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:eb85c150a60609d7b22b70b99d6a1a7a1c035fd64e30cca203a8b8d167bb7938 docker.io/ollivier/clearwater-astaire:latest],SizeBytes:327110542,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:95d9d53fc68c24deb2095b7b91aa7e53090f99e9c1d5c43dcf5d9a6fb8a8cdc2 docker.io/ollivier/clearwater-bono:latest],SizeBytes:303550943,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.7-0],SizeBytes:299470271,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:861863a8f603b8851858fcb66492d5fa8af26e14ec84a26da5d75fe762b144b2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298507433,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:98347f9bf0eaf79649590e3fa0ea8d1938ae50d7703e8f9c171f0d74520ac7fb docker.io/ollivier/clearwater-homestead:latest],SizeBytes:295048084,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:adfa3978f2c94734010c014a2be7db9bc328419e0a205904543a86cd0719bd1a 
docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287324942,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:3e838bae03946022eae06e3d343167d4b28507909e9c17e1bf574a23b423f83d docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285384791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.19.0-rc.1],SizeBytes:137937533,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:4ba7f14019eaf22c4aa0095ebbce463fcbf2e2074f6dae826634ec7ce7a876e9],SizeBytes:117083310,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.19.0-rc.1],SizeBytes:101224746,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.19.0-rc.1],SizeBytes:87920444,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:735f090b15d5efc576da1602d8c678bf39a7605c0718ed915daec8f2297db2ff k8s.gcr.io/etcd:3.4.9],SizeBytes:86734312,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.19.0-rc.1],SizeBytes:67843882,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost:2.20 us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20],SizeBytes:46251412,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:77e928c23a5942aa681646be96dfb5897efe17b1e8676e8e94003ad08891b881 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:39388175,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:90c5222cad2b012b1c581f1bdcbd91adcf68c105ca8a7e73c63d1ed44feeca3c docker.io/aquasec/kube-hunter:latest],SizeBytes:28563877,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:17444032,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d7dc3a4976d3bae4597677cbe5f9105877f4287771e555cd9b5c0fbca6105db6 docker.io/aquasec/kube-bench:latest],SizeBytes:8030821,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:4f47c01fa91355af2865ac10fef5bf6ec9c7f42ad2321377c21e844427972977 docker.io/library/busybox:latest],SizeBytes:767890,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Aug 25 00:08:59.945: INFO: Logging kubelet events for node latest-worker Aug 25 00:08:59.949: INFO: Logging pods the kubelet thinks is on node latest-worker Aug 25 00:08:59.965: INFO: daemon-set-64t9w started at 2020-08-21 01:17:50 +0000 UTC (0+1 container statuses recorded) Aug 25 00:08:59.965: INFO: Container app ready: true, restart count 0 Aug 25 00:08:59.965: INFO: kube-proxy-82wrf started at 2020-08-15 09:42:30 +0000 UTC (0+1 container statuses recorded) Aug 25 00:08:59.965: INFO: Container kube-proxy ready: true, restart count 0 Aug 25 00:08:59.965: INFO: kindnet-gmpqb started at 2020-08-15 09:42:30 +0000 UTC (0+1 container statuses recorded) Aug 25 00:08:59.965: INFO: Container kindnet-cni ready: true, restart count 1 W0825 00:08:59.970182 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
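The node dump above enumerates the four standard NodeConditions (MemoryPressure, DiskPressure, PIDPressure, Ready) along with the kubelet's cached image list. As a hedged aside — assuming access to the same kubeconfig the suite uses — an equivalent ad-hoc check of those conditions would be:

$ kubectl --kubeconfig=/root/.kube/config get node latest-worker \
    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
# For a healthy node, per the dump above, this prints:
#   MemoryPressure=False
#   DiskPressure=False
#   PIDPressure=False
#   Ready=True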
Aug 25 00:09:00.022: INFO: Latency metrics for node latest-worker Aug 25 00:09:00.022: INFO: Logging node info for node latest-worker2 Aug 25 00:09:00.094: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 /api/v1/nodes/latest-worker2 0e8bca53-43cd-4827-990c-d232e1852e08 3425424 0 2020-08-15 09:42:29 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2020-08-15 09:42:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}},"f:labels":{"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubeadm Update v1 2020-08-15 09:42:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kubelet Update v1 2020-08-25 00:07:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922108928 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-08-25 00:07:54 +0000 UTC,LastTransitionTime:2020-08-15 09:42:29 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-08-25 00:07:54 +0000 UTC,LastTransitionTime:2020-08-15 09:42:29 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-08-25 00:07:54 +0000 UTC,LastTransitionTime:2020-08-15 09:42:29 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-08-25 00:07:54 +0000 UTC,LastTransitionTime:2020-08-15 09:42:50 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c01f9d6dc3c84901a8eec574df183c82,SystemUUID:9c567046-ce77-43e5-9100-5388d15772fe,BootID:11738d2d-5baa-4089-8e7f-2fb0329fce58,KernelVersion:4.15.0-109-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.4.0-beta.1-85-g334f567e,KubeletVersion:v1.19.0-rc.1,KubeProxyVersion:v1.19.0-rc.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:232be9c5a4400e4c5e0932fde50af8f379e3e9ddd4d3f28da6ec78c86f6ed9f6 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386367560,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:0b4d47a5161ecb6b44f6a479a27522b802096a2deea049cd6f3c01a62b585318 docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360604157,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:28557b896e190c72f02121314ca7c9abaca30f91a733b566b2c44b761e5a252c docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351361235,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:257ef9011d4ff30771c0c48ef7e3b16926dce88c17d4435953f433fa9e0d731a docker.io/ollivier/clearwater-homer:latest],SizeBytes:344184630,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:eb85c150a60609d7b22b70b99d6a1a7a1c035fd64e30cca203a8b8d167bb7938 docker.io/ollivier/clearwater-astaire:latest],SizeBytes:327110542,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:95d9d53fc68c24deb2095b7b91aa7e53090f99e9c1d5c43dcf5d9a6fb8a8cdc2 docker.io/ollivier/clearwater-bono:latest],SizeBytes:303550943,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12f377200949c25fde1e54bba639d34d119edd7cfcfb1d117526dba677c03c85 k8s.gcr.io/etcd:3.4.7 k8s.gcr.io/etcd:3.4.7-0],SizeBytes:299470271,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:861863a8f603b8851858fcb66492d5fa8af26e14ec84a26da5d75fe762b144b2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298507433,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:98347f9bf0eaf79649590e3fa0ea8d1938ae50d7703e8f9c171f0d74520ac7fb docker.io/ollivier/clearwater-homestead:latest],SizeBytes:295048084,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:adfa3978f2c94734010c014a2be7db9bc328419e0a205904543a86cd0719bd1a docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287324942,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:3e838bae03946022eae06e3d343167d4b28507909e9c17e1bf574a23b423f83d 
docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285384791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.19.0-rc.1],SizeBytes:137937533,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:4ba7f14019eaf22c4aa0095ebbce463fcbf2e2074f6dae826634ec7ce7a876e9],SizeBytes:117083310,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.19.0-rc.1],SizeBytes:101224746,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.19.0-rc.1],SizeBytes:87920444,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:735f090b15d5efc576da1602d8c678bf39a7605c0718ed915daec8f2297db2ff k8s.gcr.io/etcd:3.4.9],SizeBytes:86734312,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.19.0-rc.1],SizeBytes:67843882,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0 us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 k8s.gcr.io/e2e-test-images/agnhost:2.20],SizeBytes:46251412,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:77e928c23a5942aa681646be96dfb5897efe17b1e8676e8e94003ad08891b881 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:39388175,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:90c5222cad2b012b1c581f1bdcbd91adcf68c105ca8a7e73c63d1ed44feeca3c docker.io/aquasec/kube-hunter:latest],SizeBytes:28563877,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:17444032,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:d7dc3a4976d3bae4597677cbe5f9105877f4287771e555cd9b5c0fbca6105db6 docker.io/aquasec/kube-bench:latest],SizeBytes:8030821,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 
gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[docker.io/library/busybox@sha256:4f47c01fa91355af2865ac10fef5bf6ec9c7f42ad2321377c21e844427972977 docker.io/library/busybox:latest],SizeBytes:767890,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Aug 25 00:09:00.094: INFO: Logging kubelet events for node latest-worker2 Aug 25 00:09:00.098: INFO: Logging pods the kubelet thinks is on node latest-worker2 Aug 25 00:09:00.105: INFO: kube-proxy-fjk8r started at 2020-08-15 09:42:29 +0000 UTC (0+1 container statuses recorded) Aug 25 00:09:00.105: INFO: Container kube-proxy ready: true, restart count 0 Aug 25 00:09:00.105: INFO: kindnet-grzzh started at 2020-08-15 09:42:30 +0000 UTC (0+1 container statuses recorded) Aug 25 00:09:00.105: INFO: Container kindnet-cni ready: true, restart count 1 Aug 25 00:09:00.105: INFO: daemon-set-jxhg7 started at 2020-08-21 01:17:50 +0000 UTC (0+1 container statuses recorded) Aug 25 00:09:00.105: INFO: Container app ready: true, restart count 0 W0825 00:09:00.112204 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 25 00:09:00.169: INFO: Latency metrics for node latest-worker2 Aug 25 00:09:00.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-555" for this suite. 
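The node diagnostics above were collected because the test summarized next failed its log-filtering assertion ("Expected : 3 to equal : 1"). That conformance test exercises kubectl's log-filtering flags; a sketch of the kind of invocation being verified, with pod and container names assumed, is:

$ kubectl logs <pod-name> <container-name> --namespace=kubectl-555 --tail=1
# The assertion expects exactly one line of output back; receiving three
# lines instead is what produces the "Expected : 3 to equal : 1" failure
# recorded in the summary that follows.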
• Failure [18.598 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1411 should be able to retrieve and filter logs [Conformance] [It] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 25 00:08:48.986: Expected : 3 to equal : 1 /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1468 ------------------------------ {"msg":"FAILED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":303,"completed":140,"skipped":2434,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:09:00.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should add annotations for pods in rc [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Aug 25 00:09:00.570: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6810' Aug 25 00:09:00.924: INFO: stderr: "" Aug 25 00:09:00.924: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Aug 25 00:09:01.928: INFO: Selector matched 1 pods for map[app:agnhost] Aug 25 00:09:01.928: INFO: Found 0 / 1 Aug 25 00:09:02.929: INFO: Selector matched 1 pods for map[app:agnhost] Aug 25 00:09:02.929: INFO: Found 0 / 1 Aug 25 00:09:03.932: INFO: Selector matched 1 pods for map[app:agnhost] Aug 25 00:09:03.932: INFO: Found 0 / 1 Aug 25 00:09:04.934: INFO: Selector matched 1 pods for map[app:agnhost] Aug 25 00:09:04.934: INFO: Found 1 / 1 Aug 25 00:09:04.934: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Aug 25 00:09:04.937: INFO: Selector matched 1 pods for map[app:agnhost] Aug 25 00:09:04.937: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
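The loop that follows patches each matched pod to add an annotation. Pulled out of the test harness, the equivalent standalone command (pod name and namespace taken from the run below) is:

$ kubectl patch pod agnhost-primary-q5fpt --namespace=kubectl-6810 \
    -p '{"metadata":{"annotations":{"x":"y"}}}'
# Strategic-merge patch; the resulting annotation can be confirmed with:
$ kubectl get pod agnhost-primary-q5fpt --namespace=kubectl-6810 \
    -o jsonpath='{.metadata.annotations.x}'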
Aug 25 00:09:04.937: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config patch pod agnhost-primary-q5fpt --namespace=kubectl-6810 -p {"metadata":{"annotations":{"x":"y"}}}' Aug 25 00:09:05.176: INFO: stderr: "" Aug 25 00:09:05.176: INFO: stdout: "pod/agnhost-primary-q5fpt patched\n" STEP: checking annotations Aug 25 00:09:05.457: INFO: Selector matched 1 pods for map[app:agnhost] Aug 25 00:09:05.457: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:09:05.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6810" for this suite. • [SLOW TEST:5.350 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1490 should add annotations for pods in rc [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":303,"completed":141,"skipped":2436,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:09:05.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 25 00:09:05.856: INFO: Waiting up to 5m0s for pod "downwardapi-volume-11ffcff9-2386-423f-8cc5-d1a39542bc8c" in namespace "projected-9112" to be "Succeeded or Failed" Aug 25 00:09:06.118: INFO: Pod "downwardapi-volume-11ffcff9-2386-423f-8cc5-d1a39542bc8c": Phase="Pending", Reason="", readiness=false. Elapsed: 261.744577ms Aug 25 00:09:08.303: INFO: Pod "downwardapi-volume-11ffcff9-2386-423f-8cc5-d1a39542bc8c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.447273851s Aug 25 00:09:10.501: INFO: Pod "downwardapi-volume-11ffcff9-2386-423f-8cc5-d1a39542bc8c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.644942383s Aug 25 00:09:12.504: INFO: Pod "downwardapi-volume-11ffcff9-2386-423f-8cc5-d1a39542bc8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.648296793s STEP: Saw pod success Aug 25 00:09:12.504: INFO: Pod "downwardapi-volume-11ffcff9-2386-423f-8cc5-d1a39542bc8c" satisfied condition "Succeeded or Failed" Aug 25 00:09:12.507: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-11ffcff9-2386-423f-8cc5-d1a39542bc8c container client-container: STEP: delete the pod Aug 25 00:09:13.013: INFO: Waiting for pod downwardapi-volume-11ffcff9-2386-423f-8cc5-d1a39542bc8c to disappear Aug 25 00:09:13.125: INFO: Pod downwardapi-volume-11ffcff9-2386-423f-8cc5-d1a39542bc8c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:09:13.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9112" for this suite. • [SLOW TEST:7.610 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":142,"skipped":2470,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:09:13.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 25 00:09:13.565: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Aug 25 00:09:16.674: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7067 create -f -' Aug 25 00:09:20.318: INFO: stderr: "" Aug 25 00:09:20.318: INFO: stdout: 
"e2e-test-crd-publish-openapi-1438-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Aug 25 00:09:20.318: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7067 delete e2e-test-crd-publish-openapi-1438-crds test-cr' Aug 25 00:09:20.467: INFO: stderr: "" Aug 25 00:09:20.467: INFO: stdout: "e2e-test-crd-publish-openapi-1438-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Aug 25 00:09:20.467: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7067 apply -f -' Aug 25 00:09:20.726: INFO: stderr: "" Aug 25 00:09:20.726: INFO: stdout: "e2e-test-crd-publish-openapi-1438-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Aug 25 00:09:20.726: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7067 delete e2e-test-crd-publish-openapi-1438-crds test-cr' Aug 25 00:09:20.840: INFO: stderr: "" Aug 25 00:09:20.840: INFO: stdout: "e2e-test-crd-publish-openapi-1438-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Aug 25 00:09:20.840: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1438-crds' Aug 25 00:09:21.153: INFO: stderr: "" Aug 25 00:09:21.153: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1438-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:09:24.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7067" for this suite. 
• [SLOW TEST:11.025 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":303,"completed":143,"skipped":2471,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:09:24.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod Aug 25 00:09:30.297: INFO: Pod pod-hostip-82255dfe-77af-4800-9919-c92017e6ebbf has hostIP: 172.18.0.11 [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:09:30.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8759" for this suite. 
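The host-IP check above only asserts that status.hostIP is populated with the node's address (172.18.0.11 is latest-worker's InternalIP from the earlier node dump). A one-line equivalent, with names taken from the run:

$ kubectl get pod pod-hostip-82255dfe-77af-4800-9919-c92017e6ebbf \
    --namespace=pods-8759 -o jsonpath='{.status.hostIP}'
# Prints 172.18.0.11 while the pod is scheduled on latest-worker.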
• [SLOW TEST:6.155 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":303,"completed":144,"skipped":2483,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:09:30.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 25 00:09:30.383: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4cc9a64a-9dbe-4a73-a329-758b1794a329" in namespace "downward-api-9038" to be "Succeeded or Failed" Aug 25 00:09:30.423: INFO: Pod "downwardapi-volume-4cc9a64a-9dbe-4a73-a329-758b1794a329": Phase="Pending", Reason="", readiness=false. Elapsed: 39.599965ms Aug 25 00:09:32.627: INFO: Pod "downwardapi-volume-4cc9a64a-9dbe-4a73-a329-758b1794a329": Phase="Pending", Reason="", readiness=false. Elapsed: 2.243974015s Aug 25 00:09:34.647: INFO: Pod "downwardapi-volume-4cc9a64a-9dbe-4a73-a329-758b1794a329": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.263915931s STEP: Saw pod success Aug 25 00:09:34.647: INFO: Pod "downwardapi-volume-4cc9a64a-9dbe-4a73-a329-758b1794a329" satisfied condition "Succeeded or Failed" Aug 25 00:09:34.651: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-4cc9a64a-9dbe-4a73-a329-758b1794a329 container client-container: STEP: delete the pod Aug 25 00:09:34.684: INFO: Waiting for pod downwardapi-volume-4cc9a64a-9dbe-4a73-a329-758b1794a329 to disappear Aug 25 00:09:34.693: INFO: Pod downwardapi-volume-4cc9a64a-9dbe-4a73-a329-758b1794a329 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:09:34.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9038" for this suite. 
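The DefaultMode test above mounts a downward API volume and checks the permission bits applied to the projected files. A minimal sketch of such a pod follows, assuming the conventional 0400 mode (the exact mode is not shown in this log); the agnhost image and the client-container name are borrowed from elsewhere in this run:

$ kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-sketch   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20
    command: ["/bin/sh", "-c", "ls -l /etc/podinfo"]   # show file modes
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400        # assumed mode applied to projected files
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF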
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":145,"skipped":2484,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:09:34.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 25 00:09:34.739: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 25 00:09:34.878: INFO: Waiting for terminating namespaces to be deleted... Aug 25 00:09:34.882: INFO: Logging pods the apiserver thinks is on node latest-worker before test Aug 25 00:09:34.887: INFO: daemon-set-64t9w from daemonsets-1323 started at 2020-08-21 01:17:50 +0000 UTC (1 container statuses recorded) Aug 25 00:09:34.887: INFO: Container app ready: true, restart count 0 Aug 25 00:09:34.887: INFO: kindnet-gmpqb from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Aug 25 00:09:34.887: INFO: Container kindnet-cni ready: true, restart count 1 Aug 25 00:09:34.887: INFO: kube-proxy-82wrf from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Aug 25 00:09:34.887: INFO: Container kube-proxy ready: true, restart count 0 Aug 25 00:09:34.887: INFO: pod-hostip-82255dfe-77af-4800-9919-c92017e6ebbf from pods-8759 started at 2020-08-25 00:09:24 +0000 UTC (1 container statuses recorded) Aug 25 00:09:34.887: INFO: Container test ready: true, restart count 0 Aug 25 00:09:34.887: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Aug 25 00:09:34.892: INFO: daemon-set-jxhg7 from daemonsets-1323 started at 2020-08-21 01:17:50 +0000 UTC (1 container statuses recorded) Aug 25 00:09:34.892: INFO: Container app ready: true, restart count 0 Aug 25 00:09:34.892: INFO: kindnet-grzzh from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Aug 25 00:09:34.892: INFO: Container kindnet-cni ready: true, restart count 1 Aug 25 00:09:34.892: INFO: kube-proxy-fjk8r from kube-system started at 2020-08-15 09:42:29 +0000 UTC (1 container statuses recorded) Aug 25 00:09:34.892: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-ba214958-23ca-4488-ad5a-29be32a7189a 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-ba214958-23ca-4488-ad5a-29be32a7189a off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-ba214958-23ca-4488-ad5a-29be32a7189a [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:09:45.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9822" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:11.070 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":303,"completed":146,"skipped":2495,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:09:45.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-6912 STEP: creating service affinity-nodeport-transition in namespace services-6912 STEP: creating replication controller affinity-nodeport-transition in namespace services-6912 I0825 00:09:45.946806 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-6912, replica count: 3 I0825 00:09:48.997217 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0825 00:09:51.997482 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 
inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0825 00:09:54.997738 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 25 00:09:55.007: INFO: Creating new exec pod Aug 25 00:10:02.033: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6912 execpod-affinity9gwzx -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' Aug 25 00:10:02.269: INFO: stderr: "I0825 00:10:02.167942 1291 log.go:181] (0xc0009df550) (0xc000852960) Create stream\nI0825 00:10:02.168025 1291 log.go:181] (0xc0009df550) (0xc000852960) Stream added, broadcasting: 1\nI0825 00:10:02.170914 1291 log.go:181] (0xc0009df550) Reply frame received for 1\nI0825 00:10:02.170940 1291 log.go:181] (0xc0009df550) (0xc000d88140) Create stream\nI0825 00:10:02.170949 1291 log.go:181] (0xc0009df550) (0xc000d88140) Stream added, broadcasting: 3\nI0825 00:10:02.171877 1291 log.go:181] (0xc0009df550) Reply frame received for 3\nI0825 00:10:02.171946 1291 log.go:181] (0xc0009df550) (0xc0006ba460) Create stream\nI0825 00:10:02.171978 1291 log.go:181] (0xc0009df550) (0xc0006ba460) Stream added, broadcasting: 5\nI0825 00:10:02.173088 1291 log.go:181] (0xc0009df550) Reply frame received for 5\nI0825 00:10:02.250245 1291 log.go:181] (0xc0009df550) Data frame received for 5\nI0825 00:10:02.250268 1291 log.go:181] (0xc0006ba460) (5) Data frame handling\nI0825 00:10:02.250276 1291 log.go:181] (0xc0006ba460) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nI0825 00:10:02.250617 1291 log.go:181] (0xc0009df550) Data frame received for 5\nI0825 00:10:02.250627 1291 log.go:181] (0xc0006ba460) (5) Data frame handling\nI0825 00:10:02.250632 1291 log.go:181] (0xc0006ba460) (5) Data frame sent\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0825 00:10:02.251204 1291 log.go:181] (0xc0009df550) Data frame received for 5\nI0825 00:10:02.251246 1291 log.go:181] (0xc0009df550) Data frame received for 3\nI0825 00:10:02.251291 1291 log.go:181] (0xc000d88140) (3) Data frame handling\nI0825 00:10:02.251325 1291 log.go:181] (0xc0006ba460) (5) Data frame handling\nI0825 00:10:02.253278 1291 log.go:181] (0xc0009df550) Data frame received for 1\nI0825 00:10:02.253351 1291 log.go:181] (0xc000852960) (1) Data frame handling\nI0825 00:10:02.253393 1291 log.go:181] (0xc000852960) (1) Data frame sent\nI0825 00:10:02.253418 1291 log.go:181] (0xc0009df550) (0xc000852960) Stream removed, broadcasting: 1\nI0825 00:10:02.253445 1291 log.go:181] (0xc0009df550) Go away received\nI0825 00:10:02.253945 1291 log.go:181] (0xc0009df550) (0xc000852960) Stream removed, broadcasting: 1\nI0825 00:10:02.253967 1291 log.go:181] (0xc0009df550) (0xc000d88140) Stream removed, broadcasting: 3\nI0825 00:10:02.253979 1291 log.go:181] (0xc0009df550) (0xc0006ba460) Stream removed, broadcasting: 5\n" Aug 25 00:10:02.270: INFO: stdout: "" Aug 25 00:10:02.270: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6912 execpod-affinity9gwzx -- /bin/sh -x -c nc -zv -t -w 2 10.105.108.204 80' Aug 25 00:10:02.477: INFO: stderr: "I0825 00:10:02.399411 1309 log.go:181] (0xc00039ad10) (0xc00050b9a0) Create stream\nI0825 00:10:02.399494 1309 log.go:181] (0xc00039ad10) (0xc00050b9a0) Stream added, broadcasting: 1\nI0825 00:10:02.404622 1309 log.go:181] (0xc00039ad10) Reply 
frame received for 1\nI0825 00:10:02.404657 1309 log.go:181] (0xc00039ad10) (0xc000378000) Create stream\nI0825 00:10:02.404665 1309 log.go:181] (0xc00039ad10) (0xc000378000) Stream added, broadcasting: 3\nI0825 00:10:02.405794 1309 log.go:181] (0xc00039ad10) Reply frame received for 3\nI0825 00:10:02.405829 1309 log.go:181] (0xc00039ad10) (0xc000208280) Create stream\nI0825 00:10:02.405840 1309 log.go:181] (0xc00039ad10) (0xc000208280) Stream added, broadcasting: 5\nI0825 00:10:02.406526 1309 log.go:181] (0xc00039ad10) Reply frame received for 5\nI0825 00:10:02.465436 1309 log.go:181] (0xc00039ad10) Data frame received for 3\nI0825 00:10:02.465458 1309 log.go:181] (0xc000378000) (3) Data frame handling\nI0825 00:10:02.466076 1309 log.go:181] (0xc00039ad10) Data frame received for 5\nI0825 00:10:02.466099 1309 log.go:181] (0xc000208280) (5) Data frame handling\nI0825 00:10:02.466120 1309 log.go:181] (0xc000208280) (5) Data frame sent\n+ nc -zv -t -w 2 10.105.108.204 80\nConnection to 10.105.108.204 80 port [tcp/http] succeeded!\nI0825 00:10:02.466372 1309 log.go:181] (0xc00039ad10) Data frame received for 5\nI0825 00:10:02.466387 1309 log.go:181] (0xc000208280) (5) Data frame handling\nI0825 00:10:02.468058 1309 log.go:181] (0xc00039ad10) Data frame received for 1\nI0825 00:10:02.468084 1309 log.go:181] (0xc00050b9a0) (1) Data frame handling\nI0825 00:10:02.468102 1309 log.go:181] (0xc00050b9a0) (1) Data frame sent\nI0825 00:10:02.468122 1309 log.go:181] (0xc00039ad10) (0xc00050b9a0) Stream removed, broadcasting: 1\nI0825 00:10:02.468135 1309 log.go:181] (0xc00039ad10) Go away received\nI0825 00:10:02.468388 1309 log.go:181] (0xc00039ad10) (0xc00050b9a0) Stream removed, broadcasting: 1\nI0825 00:10:02.468404 1309 log.go:181] (0xc00039ad10) (0xc000378000) Stream removed, broadcasting: 3\nI0825 00:10:02.468412 1309 log.go:181] (0xc00039ad10) (0xc000208280) Stream removed, broadcasting: 5\n" Aug 25 00:10:02.477: INFO: stdout: "" Aug 25 00:10:02.477: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6912 execpod-affinity9gwzx -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 30584' Aug 25 00:10:02.687: INFO: stderr: "I0825 00:10:02.611046 1327 log.go:181] (0xc00100b3f0) (0xc0005b8a00) Create stream\nI0825 00:10:02.611112 1327 log.go:181] (0xc00100b3f0) (0xc0005b8a00) Stream added, broadcasting: 1\nI0825 00:10:02.616203 1327 log.go:181] (0xc00100b3f0) Reply frame received for 1\nI0825 00:10:02.616257 1327 log.go:181] (0xc00100b3f0) (0xc0005b8000) Create stream\nI0825 00:10:02.616271 1327 log.go:181] (0xc00100b3f0) (0xc0005b8000) Stream added, broadcasting: 3\nI0825 00:10:02.617404 1327 log.go:181] (0xc00100b3f0) Reply frame received for 3\nI0825 00:10:02.617452 1327 log.go:181] (0xc00100b3f0) (0xc0007ac320) Create stream\nI0825 00:10:02.617463 1327 log.go:181] (0xc00100b3f0) (0xc0007ac320) Stream added, broadcasting: 5\nI0825 00:10:02.618544 1327 log.go:181] (0xc00100b3f0) Reply frame received for 5\nI0825 00:10:02.678206 1327 log.go:181] (0xc00100b3f0) Data frame received for 3\nI0825 00:10:02.678241 1327 log.go:181] (0xc0005b8000) (3) Data frame handling\nI0825 00:10:02.678318 1327 log.go:181] (0xc00100b3f0) Data frame received for 5\nI0825 00:10:02.678355 1327 log.go:181] (0xc0007ac320) (5) Data frame handling\nI0825 00:10:02.678387 1327 log.go:181] (0xc0007ac320) (5) Data frame sent\nI0825 00:10:02.678406 1327 log.go:181] (0xc00100b3f0) Data frame received for 5\nI0825 00:10:02.678419 1327 log.go:181] (0xc0007ac320) 
(5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.11 30584\nConnection to 172.18.0.11 30584 port [tcp/30584] succeeded!\nI0825 00:10:02.679890 1327 log.go:181] (0xc00100b3f0) Data frame received for 1\nI0825 00:10:02.679911 1327 log.go:181] (0xc0005b8a00) (1) Data frame handling\nI0825 00:10:02.679926 1327 log.go:181] (0xc0005b8a00) (1) Data frame sent\nI0825 00:10:02.679942 1327 log.go:181] (0xc00100b3f0) (0xc0005b8a00) Stream removed, broadcasting: 1\nI0825 00:10:02.679966 1327 log.go:181] (0xc00100b3f0) Go away received\nI0825 00:10:02.680299 1327 log.go:181] (0xc00100b3f0) (0xc0005b8a00) Stream removed, broadcasting: 1\nI0825 00:10:02.680317 1327 log.go:181] (0xc00100b3f0) (0xc0005b8000) Stream removed, broadcasting: 3\nI0825 00:10:02.680325 1327 log.go:181] (0xc00100b3f0) (0xc0007ac320) Stream removed, broadcasting: 5\n" Aug 25 00:10:02.688: INFO: stdout: "" Aug 25 00:10:02.688: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6912 execpod-affinity9gwzx -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 30584' Aug 25 00:10:02.913: INFO: stderr: "I0825 00:10:02.816014 1345 log.go:181] (0xc000f3af20) (0xc0002ce640) Create stream\nI0825 00:10:02.816083 1345 log.go:181] (0xc000f3af20) (0xc0002ce640) Stream added, broadcasting: 1\nI0825 00:10:02.822268 1345 log.go:181] (0xc000f3af20) Reply frame received for 1\nI0825 00:10:02.822322 1345 log.go:181] (0xc000f3af20) (0xc0003768c0) Create stream\nI0825 00:10:02.822340 1345 log.go:181] (0xc000f3af20) (0xc0003768c0) Stream added, broadcasting: 3\nI0825 00:10:02.823263 1345 log.go:181] (0xc000f3af20) Reply frame received for 3\nI0825 00:10:02.823300 1345 log.go:181] (0xc000f3af20) (0xc000377ae0) Create stream\nI0825 00:10:02.823315 1345 log.go:181] (0xc000f3af20) (0xc000377ae0) Stream added, broadcasting: 5\nI0825 00:10:02.824096 1345 log.go:181] (0xc000f3af20) Reply frame received for 5\nI0825 00:10:02.898665 1345 log.go:181] (0xc000f3af20) Data frame received for 5\nI0825 00:10:02.898692 1345 log.go:181] (0xc000377ae0) (5) Data frame handling\nI0825 00:10:02.898702 1345 log.go:181] (0xc000377ae0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.14 30584\nConnection to 172.18.0.14 30584 port [tcp/30584] succeeded!\nI0825 00:10:02.898730 1345 log.go:181] (0xc000f3af20) Data frame received for 3\nI0825 00:10:02.898747 1345 log.go:181] (0xc0003768c0) (3) Data frame handling\nI0825 00:10:02.899079 1345 log.go:181] (0xc000f3af20) Data frame received for 5\nI0825 00:10:02.899089 1345 log.go:181] (0xc000377ae0) (5) Data frame handling\nI0825 00:10:02.901474 1345 log.go:181] (0xc000f3af20) Data frame received for 1\nI0825 00:10:02.901545 1345 log.go:181] (0xc0002ce640) (1) Data frame handling\nI0825 00:10:02.901571 1345 log.go:181] (0xc0002ce640) (1) Data frame sent\nI0825 00:10:02.901594 1345 log.go:181] (0xc000f3af20) (0xc0002ce640) Stream removed, broadcasting: 1\nI0825 00:10:02.901619 1345 log.go:181] (0xc000f3af20) Go away received\nI0825 00:10:02.902068 1345 log.go:181] (0xc000f3af20) (0xc0002ce640) Stream removed, broadcasting: 1\nI0825 00:10:02.902091 1345 log.go:181] (0xc000f3af20) (0xc0003768c0) Stream removed, broadcasting: 3\nI0825 00:10:02.902104 1345 log.go:181] (0xc000f3af20) (0xc000377ae0) Stream removed, broadcasting: 5\n" Aug 25 00:10:02.913: INFO: stdout: "" Aug 25 00:10:02.921: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6912 execpod-affinity9gwzx -- /bin/sh -x -c for i in 
$(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.11:30584/ ; done' Aug 25 00:10:03.231: INFO: stderr: "[SPDY streams 1/3/5 created and later removed; shell trace: + seq 0 15, then sixteen rounds of + echo and + curl -q -s --connect-timeout 2 http://172.18.0.11:30584/; the repetitive log.go:181 data-frame received/handling/sent records are omitted]"
Aug 25 00:10:03.232: INFO: stdout: "\naffinity-nodeport-transition-x4kw2\naffinity-nodeport-transition-x4kw2\naffinity-nodeport-transition-48g8q\naffinity-nodeport-transition-48g8q\naffinity-nodeport-transition-48g8q\naffinity-nodeport-transition-48g8q\naffinity-nodeport-transition-x4kw2\naffinity-nodeport-transition-48g8q\naffinity-nodeport-transition-48g8q\naffinity-nodeport-transition-x4kw2\naffinity-nodeport-transition-x4kw2\naffinity-nodeport-transition-72w2j\naffinity-nodeport-transition-72w2j\naffinity-nodeport-transition-48g8q\naffinity-nodeport-transition-72w2j\naffinity-nodeport-transition-72w2j"
Aug 25 00:10:03.232: INFO: Received response from host: affinity-nodeport-transition-x4kw2
Aug 25 00:10:03.232: INFO: Received response from host: affinity-nodeport-transition-x4kw2
Aug 25 00:10:03.232: INFO: Received response from host: affinity-nodeport-transition-48g8q
Aug 25 00:10:03.232: INFO: Received response from host: affinity-nodeport-transition-48g8q
Aug 25 00:10:03.232: INFO: Received response from host: affinity-nodeport-transition-48g8q
Aug 25 00:10:03.232: INFO: Received response from host: affinity-nodeport-transition-48g8q
Aug 25 00:10:03.232: INFO: Received response from host: affinity-nodeport-transition-x4kw2
Aug 25 00:10:03.232: INFO: Received response from host: affinity-nodeport-transition-48g8q
Aug 25 00:10:03.232: INFO: Received response from host: affinity-nodeport-transition-48g8q
Aug 25 00:10:03.232: INFO: Received response from host: affinity-nodeport-transition-x4kw2
Aug 25 00:10:03.232: INFO: Received response from host: affinity-nodeport-transition-x4kw2
Aug 25 00:10:03.232: INFO: Received response from host: affinity-nodeport-transition-72w2j
Aug 25 00:10:03.232: INFO: Received response from host: affinity-nodeport-transition-72w2j
Aug 25 00:10:03.232: INFO: Received response from host: affinity-nodeport-transition-48g8q
Aug 25 00:10:03.232: INFO: Received response from host: affinity-nodeport-transition-72w2j
Aug 25 00:10:03.232: INFO: Received response from host: affinity-nodeport-transition-72w2j
Aug 25 00:10:03.286: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-6912 execpod-affinity9gwzx -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.11:30584/ ; done'
Aug 25 00:10:03.664: INFO: stderr: "[same SPDY stream setup/teardown and shell trace as above; repetitive data-frame records omitted]"
Aug 25 00:10:03.664: INFO: stdout: "\naffinity-nodeport-transition-72w2j\naffinity-nodeport-transition-72w2j\naffinity-nodeport-transition-72w2j\naffinity-nodeport-transition-72w2j\naffinity-nodeport-transition-72w2j\naffinity-nodeport-transition-72w2j\naffinity-nodeport-transition-72w2j\naffinity-nodeport-transition-72w2j\naffinity-nodeport-transition-72w2j\naffinity-nodeport-transition-72w2j\naffinity-nodeport-transition-72w2j\naffinity-nodeport-transition-72w2j\naffinity-nodeport-transition-72w2j\naffinity-nodeport-transition-72w2j\naffinity-nodeport-transition-72w2j\naffinity-nodeport-transition-72w2j"
Aug 25 00:10:03.664: INFO: Received response from host: affinity-nodeport-transition-72w2j
Aug 25 00:10:03.664: INFO: Received response from host: affinity-nodeport-transition-72w2j
Aug 25 00:10:03.664: INFO: Received response from host: affinity-nodeport-transition-72w2j
Aug 25 00:10:03.664: INFO: Received response from host: affinity-nodeport-transition-72w2j
Aug 25 00:10:03.664: INFO: Received response from host: affinity-nodeport-transition-72w2j Aug 25 00:10:03.664: INFO: Received response from host: affinity-nodeport-transition-72w2j Aug 25 00:10:03.664: INFO: Received response from host: affinity-nodeport-transition-72w2j Aug 25 00:10:03.664: INFO: Received response from host: affinity-nodeport-transition-72w2j Aug 25 00:10:03.664: INFO: Received response from host: affinity-nodeport-transition-72w2j Aug 25 00:10:03.664: INFO: Received response from host: affinity-nodeport-transition-72w2j Aug 25 00:10:03.664: INFO: Received response from host: affinity-nodeport-transition-72w2j Aug 25 00:10:03.664: INFO: Received response from host: affinity-nodeport-transition-72w2j Aug 25 00:10:03.664: INFO: Received response from host: affinity-nodeport-transition-72w2j Aug 25 00:10:03.664: INFO: Received response from host: affinity-nodeport-transition-72w2j Aug 25 00:10:03.664: INFO: Received response from host: affinity-nodeport-transition-72w2j Aug 25 00:10:03.664: INFO: Received response from host: affinity-nodeport-transition-72w2j Aug 25 00:10:03.664: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-6912, will wait for the garbage collector to delete the pods Aug 25 00:10:03.769: INFO: Deleting ReplicationController affinity-nodeport-transition took: 4.680818ms Aug 25 00:10:04.169: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 400.198893ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:10:20.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6912" for this suite. 
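For reference, the probe driven through the exec pod above can be reproduced by hand. A minimal sketch, assuming kubectl points at the same cluster and reusing the namespace, exec pod, node IP and NodePort from this log (the test itself toggles sessionAffinity through client-go; the patch command here is an illustrative equivalent):

  # Flip the service between ClientIP and None session affinity (the transition under test).
  kubectl --namespace=services-6912 patch service affinity-nodeport-transition \
    -p '{"spec":{"sessionAffinity":"None"}}'

  # Re-run the same 16-request probe; with affinity ClientIP every response should
  # name a single backend pod, with None the responses may spread across backends.
  kubectl --namespace=services-6912 exec execpod-affinity9gwzx -- /bin/sh -x -c \
    'for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.11:30584/ ; done'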
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:34.632 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":147,"skipped":2533,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:10:20.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 25 00:10:21.658: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 25 00:10:23.667: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911021, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911021, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911021, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911021, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 25 00:10:25.678: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911021, 
loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911021, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911021, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911021, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 25 00:10:28.795: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:10:29.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6108" for this suite. STEP: Destroying namespace "webhook-6108-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.206 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":303,"completed":148,"skipped":2537,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:10:29.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 25 00:10:30.848: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 25 00:10:32.858: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911030, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911030, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911031, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911030, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 25 00:10:35.925: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:10:36.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-587" for this suite. STEP: Destroying namespace "webhook-587-markers" for this suite. 
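The "Patching a validating webhook configuration's rules" step above corresponds to a JSON patch against the ValidatingWebhookConfiguration object. A hand-rolled equivalent (the configuration name is a placeholder; the framework performs this through client-go rather than kubectl):

  # Remove CREATE from the first rule, so non-compliant ConfigMap creates pass through...
  kubectl patch validatingwebhookconfiguration <config-name> --type=json \
    -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]'

  # ...then add CREATE back, after which the create in the final step is rejected again.
  kubectl patch validatingwebhookconfiguration <config-name> --type=json \
    -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE"]}]'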
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.658 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":303,"completed":149,"skipped":2538,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:10:36.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 25 00:10:36.405: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:10:37.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-751" for this suite. 
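The CRD test leaves little trace in the log; what it exercises is equivalent to applying and then deleting a minimal definition. A sketch with illustrative group and kind names:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: widgets.example.com        # must be <plural>.<group>
  spec:
    group: example.com
    scope: Namespaced
    names:
      plural: widgets
      singular: widget
      kind: Widget
    versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
  EOF
  kubectl delete crd widgets.example.com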
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":303,"completed":150,"skipped":2551,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:10:37.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium Aug 25 00:10:37.495: INFO: Waiting up to 5m0s for pod "pod-3a3ebd14-74fb-47fc-8214-666e5e78049f" in namespace "emptydir-7590" to be "Succeeded or Failed" Aug 25 00:10:37.707: INFO: Pod "pod-3a3ebd14-74fb-47fc-8214-666e5e78049f": Phase="Pending", Reason="", readiness=false. Elapsed: 212.262785ms Aug 25 00:10:39.946: INFO: Pod "pod-3a3ebd14-74fb-47fc-8214-666e5e78049f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.450822235s Aug 25 00:10:41.955: INFO: Pod "pod-3a3ebd14-74fb-47fc-8214-666e5e78049f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.45964139s STEP: Saw pod success Aug 25 00:10:41.955: INFO: Pod "pod-3a3ebd14-74fb-47fc-8214-666e5e78049f" satisfied condition "Succeeded or Failed" Aug 25 00:10:41.956: INFO: Trying to get logs from node latest-worker pod pod-3a3ebd14-74fb-47fc-8214-666e5e78049f container test-container: STEP: delete the pod Aug 25 00:10:42.042: INFO: Waiting for pod pod-3a3ebd14-74fb-47fc-8214-666e5e78049f to disappear Aug 25 00:10:42.062: INFO: Pod pod-3a3ebd14-74fb-47fc-8214-666e5e78049f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:10:42.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7590" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":151,"skipped":2563,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:10:42.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Aug 25 00:10:49.505: INFO: 10 pods remaining Aug 25 00:10:49.505: INFO: 10 pods has nil DeletionTimestamp Aug 25 00:10:49.505: INFO: Aug 25 00:10:50.754: INFO: 0 pods remaining Aug 25 00:10:50.754: INFO: 0 pods has nil DeletionTimestamp Aug 25 00:10:50.754: INFO: Aug 25 00:10:51.769: INFO: 0 pods remaining Aug 25 00:10:51.769: INFO: 0 pods has nil DeletionTimestamp Aug 25 00:10:51.769: INFO: Aug 25 00:10:52.260: INFO: 0 pods remaining Aug 25 00:10:52.260: INFO: 0 pods has nil DeletionTimestamp Aug 25 00:10:52.260: INFO: STEP: Gathering metrics W0825 00:10:53.471766 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 25 00:11:55.799: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:11:55.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9504" for this suite. 
• [SLOW TEST:73.695 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":303,"completed":152,"skipped":2590,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:11:55.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:11:57.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1339" for this suite. 
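The same ConfigMap lifecycle, driven with kubectl instead of the client library (names and label are illustrative):

  kubectl create configmap demo-cm --from-literal=key=value          # creating
  kubectl get configmap demo-cm -o yaml                              # fetching
  kubectl label configmap demo-cm suite=demo
  kubectl patch configmap demo-cm -p '{"data":{"key":"patched"}}'    # patching
  kubectl get configmaps --all-namespaces -l suite=demo              # listing across namespaces by label
  kubectl delete configmaps -l suite=demo                            # deleting the collection by label
  kubectl get configmaps                                             # listing what remains in the namespace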
•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":303,"completed":153,"skipped":2620,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:11:57.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 25 00:11:58.103: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 25 00:12:00.114: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911118, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911118, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911118, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911118, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 25 00:12:03.193: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:12:03.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-151" for this suite. STEP: Destroying namespace "webhook-151-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.356 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":303,"completed":154,"skipped":2649,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:12:03.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-9593 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9593 to expose endpoints map[] Aug 25 00:12:03.707: INFO: successfully validated that service multi-endpoint-test in namespace services-9593 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-9593 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9593 to expose endpoints map[pod1:[100]] Aug 25 00:12:07.863: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]], will retry Aug 25 00:12:09.815: INFO: successfully validated that service multi-endpoint-test in namespace services-9593 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-9593 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9593 to expose endpoints map[pod1:[100] pod2:[101]] Aug 25 00:12:14.116: INFO: Unexpected endpoints: found map[164e15c0-fc05-4a08-89e4-c29100f92cb8:[100]], expected map[pod1:[100] pod2:[101]], will retry Aug 25 00:12:18.950: INFO: successfully validated that service multi-endpoint-test in namespace services-9593 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-9593 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9593 to expose endpoints map[pod2:[101]] Aug 25 00:12:19.044: INFO: successfully validated that service 
multi-endpoint-test in namespace services-9593 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-9593 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9593 to expose endpoints map[] Aug 25 00:12:20.059: INFO: successfully validated that service multi-endpoint-test in namespace services-9593 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:12:20.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9593" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:16.574 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":303,"completed":155,"skipped":2656,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:12:20.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-b6a4b733-15a9-4bf4-8db5-c169d5e6ab57 STEP: Creating a pod to test consume secrets Aug 25 00:12:20.679: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a2679345-6bb7-46c7-911b-c82f99056459" in namespace "projected-2484" to be "Succeeded or Failed" Aug 25 00:12:20.786: INFO: Pod "pod-projected-secrets-a2679345-6bb7-46c7-911b-c82f99056459": Phase="Pending", Reason="", readiness=false. Elapsed: 106.343836ms Aug 25 00:12:22.999: INFO: Pod "pod-projected-secrets-a2679345-6bb7-46c7-911b-c82f99056459": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31947448s Aug 25 00:12:25.049: INFO: Pod "pod-projected-secrets-a2679345-6bb7-46c7-911b-c82f99056459": Phase="Pending", Reason="", readiness=false. Elapsed: 4.369573307s Aug 25 00:12:27.102: INFO: Pod "pod-projected-secrets-a2679345-6bb7-46c7-911b-c82f99056459": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.423027915s STEP: Saw pod success Aug 25 00:12:27.102: INFO: Pod "pod-projected-secrets-a2679345-6bb7-46c7-911b-c82f99056459" satisfied condition "Succeeded or Failed" Aug 25 00:12:27.105: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-a2679345-6bb7-46c7-911b-c82f99056459 container projected-secret-volume-test: STEP: delete the pod Aug 25 00:12:27.148: INFO: Waiting for pod pod-projected-secrets-a2679345-6bb7-46c7-911b-c82f99056459 to disappear Aug 25 00:12:27.162: INFO: Pod pod-projected-secrets-a2679345-6bb7-46c7-911b-c82f99056459 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:12:27.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2484" for this suite. • [SLOW TEST:7.057 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":156,"skipped":2666,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:12:27.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Aug 25 00:12:27.513: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. 
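------------------------------
"Registering the sample API server" amounts to creating an APIService object that tells the aggregation layer to proxy a group/version to an in-cluster Service. A rough client-go sketch of that call follows; the group wardle.example.com matches the upstream sample-apiserver, but the service name "sample-api" and the CA handling are illustrative, not taken from this run.

package e2esketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
	aggregator "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
)

// registerSampleAPIService points the aggregation layer at the sample API
// server's Service; once the APIService reports Available, clients can reach
// /apis/wardle.example.com/v1alpha1 through the main apiserver.
func registerSampleAPIService(ctx context.Context, c aggregator.Interface, ns string, caBundle []byte) error {
	port := int32(443)
	_, err := c.ApiregistrationV1().APIServices().Create(ctx, &apiregv1.APIService{
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
		Spec: apiregv1.APIServiceSpec{
			Service:              &apiregv1.ServiceReference{Namespace: ns, Name: "sample-api", Port: &port},
			Group:                "wardle.example.com",
			Version:              "v1alpha1",
			CABundle:             caBundle,
			GroupPriorityMinimum: 2000,
			VersionPriority:      200,
		},
	}, metav1.CreateOptions{})
	return err
}
------------------------------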
Aug 25 00:12:27.997: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Aug 25 00:12:31.234: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911148, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911148, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911148, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911147, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67c46cd746\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 25 00:12:33.242: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911148, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911148, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911148, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911147, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67c46cd746\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 25 00:12:35.239: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911148, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911148, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911148, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911147, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67c46cd746\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 25 00:12:37.962: INFO: Waited 718.516883ms for the sample-apiserver to be ready to handle requests. 
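------------------------------
The DeploymentStatus dumps above are what a readiness poll prints while ReadyReplicas is still 0 and the Available condition is False. A minimal sketch of such a wait loop, assuming client-go's wait helpers; the function name and log wording are illustrative:

package e2esketch

import (
	"context"
	"fmt"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDeploymentAvailable polls until the Available condition flips to
// True, dumping the full status on each miss (the shape of the INFO lines above).
func waitForDeploymentAvailable(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		d, err := c.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range d.Status.Conditions {
			if cond.Type == appsv1.DeploymentAvailable && cond.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		fmt.Printf("deployment status: %+v\n", d.Status)
		return false, nil
	})
}
------------------------------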
[AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:12:42.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-4281" for this suite. • [SLOW TEST:15.229 seconds] [sig-api-machinery] Aggregator /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":303,"completed":157,"skipped":2690,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:12:42.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-3f8acfb5-30fe-4323-840d-a67660686dd8 STEP: Creating a pod to test consume configMaps Aug 25 00:12:42.891: INFO: Waiting up to 5m0s for pod "pod-configmaps-6ddbcf6d-1985-4594-8e77-acb8d698a4d1" in namespace "configmap-312" to be "Succeeded or Failed" Aug 25 00:12:42.953: INFO: Pod "pod-configmaps-6ddbcf6d-1985-4594-8e77-acb8d698a4d1": Phase="Pending", Reason="", readiness=false. Elapsed: 62.206954ms Aug 25 00:12:44.957: INFO: Pod "pod-configmaps-6ddbcf6d-1985-4594-8e77-acb8d698a4d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065709358s Aug 25 00:12:46.961: INFO: Pod "pod-configmaps-6ddbcf6d-1985-4594-8e77-acb8d698a4d1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070248086s Aug 25 00:12:49.046: INFO: Pod "pod-configmaps-6ddbcf6d-1985-4594-8e77-acb8d698a4d1": Phase="Running", Reason="", readiness=true. Elapsed: 6.15477776s Aug 25 00:12:51.050: INFO: Pod "pod-configmaps-6ddbcf6d-1985-4594-8e77-acb8d698a4d1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.159283248s STEP: Saw pod success Aug 25 00:12:51.050: INFO: Pod "pod-configmaps-6ddbcf6d-1985-4594-8e77-acb8d698a4d1" satisfied condition "Succeeded or Failed" Aug 25 00:12:51.054: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-6ddbcf6d-1985-4594-8e77-acb8d698a4d1 container configmap-volume-test: STEP: delete the pod Aug 25 00:12:51.081: INFO: Waiting for pod pod-configmaps-6ddbcf6d-1985-4594-8e77-acb8d698a4d1 to disappear Aug 25 00:12:51.109: INFO: Pod pod-configmaps-6ddbcf6d-1985-4594-8e77-acb8d698a4d1 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:12:51.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-312" for this suite. • [SLOW TEST:8.713 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":158,"skipped":2732,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:12:51.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 25 00:12:51.805: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 25 00:12:53.814: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911171, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911171, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911172, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911171, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 25 00:12:55.827: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911171, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911171, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911172, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911171, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 25 00:12:58.923: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:13:11.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9826" for this suite. STEP: Destroying namespace "webhook-9826-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:21.199 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":303,"completed":159,"skipped":2748,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:13:12.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-90c5f22d-757a-42b2-a69d-3d62a16ab775 Aug 25 00:13:12.582: INFO: Pod name my-hostname-basic-90c5f22d-757a-42b2-a69d-3d62a16ab775: Found 0 pods out of 1 Aug 25 00:13:17.657: INFO: Pod name my-hostname-basic-90c5f22d-757a-42b2-a69d-3d62a16ab775: Found 1 pods out of 1 Aug 25 00:13:17.657: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-90c5f22d-757a-42b2-a69d-3d62a16ab775" are running Aug 25 00:13:19.662: INFO: Pod "my-hostname-basic-90c5f22d-757a-42b2-a69d-3d62a16ab775-cm68d" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-25 00:13:13 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-25 00:13:13 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-90c5f22d-757a-42b2-a69d-3d62a16ab775]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-25 00:13:13 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-90c5f22d-757a-42b2-a69d-3d62a16ab775]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-25 00:13:12 +0000 UTC Reason: Message:}]) Aug 25 00:13:19.662: INFO: Trying to dial the pod 
Aug 25 00:13:24.671: INFO: Controller my-hostname-basic-90c5f22d-757a-42b2-a69d-3d62a16ab775: Got expected result from replica 1 [my-hostname-basic-90c5f22d-757a-42b2-a69d-3d62a16ab775-cm68d]: "my-hostname-basic-90c5f22d-757a-42b2-a69d-3d62a16ab775-cm68d", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:13:24.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3720" for this suite. • [SLOW TEST:12.361 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":160,"skipped":2780,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSS ------------------------------ [sig-api-machinery] Events should delete a collection of events [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:13:24.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of events Aug 25 00:13:24.911: INFO: created test-event-1 Aug 25 00:13:24.936: INFO: created test-event-2 Aug 25 00:13:24.956: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events Aug 25 00:13:24.964: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Aug 25 00:13:24.981: INFO: requesting list of events to confirm quantity [AfterEach] [sig-api-machinery] Events /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:13:24.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2725" for this suite. 
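------------------------------
The "delete collection of events" step above is a single DeleteCollection call scoped by label selector, followed by a List with the same selector to confirm zero items remain. A compact sketch, assuming client-go; the selector string is illustrative since the log does not show it:

package e2esketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteEventCollection removes all labeled test events in one request and
// verifies the remaining count, mirroring the two steps logged above.
func deleteEventCollection(ctx context.Context, c kubernetes.Interface, ns string) error {
	sel := "testevent-set=true" // illustrative label selector
	if err := c.CoreV1().Events(ns).DeleteCollection(ctx, metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: sel}); err != nil {
		return err
	}
	list, err := c.CoreV1().Events(ns).List(ctx, metav1.ListOptions{LabelSelector: sel})
	if err != nil {
		return err
	}
	if n := len(list.Items); n != 0 {
		return fmt.Errorf("expected 0 events after DeleteCollection, got %d", n)
	}
	return nil
}
------------------------------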
•{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":303,"completed":161,"skipped":2785,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:13:24.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-6b2f4a95-e77e-4d5e-99c5-32c022881b9b STEP: Creating a pod to test consume secrets Aug 25 00:13:25.198: INFO: Waiting up to 5m0s for pod "pod-secrets-7fb6b5c3-6700-473d-a603-89c6d3105d44" in namespace "secrets-8026" to be "Succeeded or Failed" Aug 25 00:13:25.242: INFO: Pod "pod-secrets-7fb6b5c3-6700-473d-a603-89c6d3105d44": Phase="Pending", Reason="", readiness=false. Elapsed: 43.658136ms Aug 25 00:13:27.271: INFO: Pod "pod-secrets-7fb6b5c3-6700-473d-a603-89c6d3105d44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072447885s Aug 25 00:13:29.325: INFO: Pod "pod-secrets-7fb6b5c3-6700-473d-a603-89c6d3105d44": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126327648s Aug 25 00:13:31.329: INFO: Pod "pod-secrets-7fb6b5c3-6700-473d-a603-89c6d3105d44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.130481448s STEP: Saw pod success Aug 25 00:13:31.329: INFO: Pod "pod-secrets-7fb6b5c3-6700-473d-a603-89c6d3105d44" satisfied condition "Succeeded or Failed" Aug 25 00:13:31.332: INFO: Trying to get logs from node latest-worker pod pod-secrets-7fb6b5c3-6700-473d-a603-89c6d3105d44 container secret-volume-test: STEP: delete the pod Aug 25 00:13:31.377: INFO: Waiting for pod pod-secrets-7fb6b5c3-6700-473d-a603-89c6d3105d44 to disappear Aug 25 00:13:31.394: INFO: Pod pod-secrets-7fb6b5c3-6700-473d-a603-89c6d3105d44 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:13:31.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8026" for this suite. 
• [SLOW TEST:6.463 seconds] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":162,"skipped":2797,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:13:31.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Aug 25 00:13:31.624: INFO: Pod name pod-release: Found 0 pods out of 1 Aug 25 00:13:36.639: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:13:36.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3031" for this suite. 
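------------------------------
"Then the pod is released" hinges on label matching: once a pod's labels stop matching the ReplicationController's selector, the controller orphans it (drops its controller ownerReference) and spawns a replacement. A sketch of the label flip that triggers this; the patched label value is illustrative:

package e2esketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// releasePodFromRC rewrites the label the RC selector keys on, which is all
// it takes for the controller to stop owning the pod.
func releasePodFromRC(ctx context.Context, c kubernetes.Interface, ns, podName string) error {
	patch := []byte(`{"metadata":{"labels":{"name":"pod-release-released"}}}`)
	_, err := c.CoreV1().Pods(ns).Patch(ctx, podName, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}
------------------------------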
• [SLOW TEST:5.339 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":303,"completed":163,"skipped":2805,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:13:36.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-f7d47353-10d1-44ae-8d57-cf6e06f62278 in namespace container-probe-7611 Aug 25 00:13:45.039: INFO: Started pod busybox-f7d47353-10d1-44ae-8d57-cf6e06f62278 in namespace container-probe-7611 STEP: checking the pod's current state and verifying that restartCount is present Aug 25 00:13:45.042: INFO: Initial restart count of pod busybox-f7d47353-10d1-44ae-8d57-cf6e06f62278 is 0 Aug 25 00:14:32.811: INFO: Restart count of pod container-probe-7611/busybox-f7d47353-10d1-44ae-8d57-cf6e06f62278 is now 1 (47.768586664s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:14:33.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7611" for this suite. 
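------------------------------
The restart observed above (count 0 to 1 after roughly 48s) is the kubelet reacting to the exec probe failing once /tmp/health disappears. A sketch of a pod that produces this pattern, assuming the v1.19-era corev1.Probe API where the handler field is named Handler (ProbeHandler in newer releases); the image tag and timings are illustrative:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// livenessExecPod creates the health file, removes it after 10s, and lets
// the `cat /tmp/health` probe fail from then on, forcing a restart.
func livenessExecPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "busybox-", Namespace: ns},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "docker.io/library/busybox:1.29", // illustrative
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
}
------------------------------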
• [SLOW TEST:56.657 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":164,"skipped":2814,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:14:33.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-33ee0a6e-89c1-424f-bc9c-4d470e216662 STEP: Creating configMap with name cm-test-opt-upd-18a7fdb0-eaf0-46b6-9c1c-e4d78a6c068e STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-33ee0a6e-89c1-424f-bc9c-4d470e216662 STEP: Updating configmap cm-test-opt-upd-18a7fdb0-eaf0-46b6-9c1c-e4d78a6c068e STEP: Creating configMap with name cm-test-opt-create-dcf67fa2-66ec-4999-97c5-e3ba5c686608 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:14:46.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3763" for this suite. 
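------------------------------
The three configMaps above (opt-del, opt-upd, opt-create) all sit behind Optional: true in a single projected volume, so deleting one, updating one, and creating one late must each be tolerated and eventually reflected in the mount. A sketch of that volume; the volume name is illustrative:

package e2esketch

import corev1 "k8s.io/api/core/v1"

// optionalProjectedVolume projects three optional configMaps into one mount;
// missing sources simply contribute no files instead of failing the pod.
func optionalProjectedVolume(delName, updName, createName string) corev1.Volume {
	opt := true
	cm := func(name string) corev1.VolumeProjection {
		return corev1.VolumeProjection{ConfigMap: &corev1.ConfigMapProjection{
			LocalObjectReference: corev1.LocalObjectReference{Name: name},
			Optional:             &opt,
		}}
	}
	return corev1.Volume{
		Name: "projected-configmap-volumes",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{cm(delName), cm(updName), cm(createName)},
			},
		},
	}
}
------------------------------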
• [SLOW TEST:13.357 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":165,"skipped":2839,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:14:46.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Aug 25 00:14:48.098: INFO: Pod name wrapped-volume-race-4de31199-6f2a-4247-9837-fc364d94069a: Found 0 pods out of 5 Aug 25 00:14:53.114: INFO: Pod name wrapped-volume-race-4de31199-6f2a-4247-9837-fc364d94069a: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-4de31199-6f2a-4247-9837-fc364d94069a in namespace emptydir-wrapper-6333, will wait for the garbage collector to delete the pods Aug 25 00:15:13.539: INFO: Deleting ReplicationController wrapped-volume-race-4de31199-6f2a-4247-9837-fc364d94069a took: 45.560221ms Aug 25 00:15:13.939: INFO: Terminating ReplicationController wrapped-volume-race-4de31199-6f2a-4247-9837-fc364d94069a pods took: 400.201259ms STEP: Creating RC which spawns configmap-volume pods Aug 25 00:15:30.793: INFO: Pod name wrapped-volume-race-8b2439fa-b775-4ac3-aef4-e817c1cfca9e: Found 0 pods out of 5 Aug 25 00:15:35.813: INFO: Pod name wrapped-volume-race-8b2439fa-b775-4ac3-aef4-e817c1cfca9e: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-8b2439fa-b775-4ac3-aef4-e817c1cfca9e in namespace emptydir-wrapper-6333, will wait for the garbage collector to delete the pods Aug 25 00:15:53.958: INFO: Deleting ReplicationController wrapped-volume-race-8b2439fa-b775-4ac3-aef4-e817c1cfca9e took: 8.016751ms Aug 25 00:15:54.658: INFO: Terminating ReplicationController wrapped-volume-race-8b2439fa-b775-4ac3-aef4-e817c1cfca9e pods took: 700.229526ms STEP: Creating RC which spawns configmap-volume pods Aug 25 00:16:12.028: INFO: Pod name wrapped-volume-race-17b476fd-42f7-4728-bedd-33f348903dfa: Found 0 pods out of 5 Aug 25 00:16:17.379: INFO: Pod name 
wrapped-volume-race-17b476fd-42f7-4728-bedd-33f348903dfa: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-17b476fd-42f7-4728-bedd-33f348903dfa in namespace emptydir-wrapper-6333, will wait for the garbage collector to delete the pods Aug 25 00:16:39.314: INFO: Deleting ReplicationController wrapped-volume-race-17b476fd-42f7-4728-bedd-33f348903dfa took: 202.192634ms Aug 25 00:16:40.414: INFO: Terminating ReplicationController wrapped-volume-race-17b476fd-42f7-4728-bedd-33f348903dfa pods took: 1.100197684s STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:17:05.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-6333" for this suite. • [SLOW TEST:139.330 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":303,"completed":166,"skipped":2852,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:17:06.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
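------------------------------
The pod created in the next step carries a preStop exec hook that phones the handler container set up just above, which is how the suite can later "check prestop hook" actually ran during deletion. A sketch of that pod, assuming the v1.19-era corev1.Lifecycle type whose hooks are corev1.Handler values; the image and handler URL shape are illustrative:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// preStopHookPod runs a throwaway container whose preStop hook calls the
// handler pod, leaving evidence that the hook executed before termination.
func preStopHookPod(ns, handlerIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook", Namespace: ns},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-prestop-exec-hook",
				Image:   "docker.io/library/busybox:1.29", // illustrative
				Command: []string{"sh", "-c", "sleep 600"},
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c", "wget -qO- http://" + handlerIP + ":8080/echo?msg=prestop"},
						},
					},
				},
			}},
		},
	}
}
------------------------------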
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Aug 25 00:17:21.491: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 25 00:17:21.537: INFO: Pod pod-with-prestop-exec-hook still exists Aug 25 00:17:23.537: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 25 00:17:23.579: INFO: Pod pod-with-prestop-exec-hook still exists Aug 25 00:17:25.540: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 25 00:17:25.548: INFO: Pod pod-with-prestop-exec-hook still exists Aug 25 00:17:27.537: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 25 00:17:27.560: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:17:27.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5818" for this suite. • [SLOW TEST:21.545 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":303,"completed":167,"skipped":2856,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:17:27.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-3543 STEP: Creating active service to test reachability 
when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-3543 STEP: creating replication controller externalsvc in namespace services-3543 I0825 00:17:28.538953 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-3543, replica count: 2 I0825 00:17:31.589328 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0825 00:17:34.589553 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Aug 25 00:17:34.753: INFO: Creating new exec pod Aug 25 00:17:41.075: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-3543 execpod7fl88 -- /bin/sh -x -c nslookup nodeport-service.services-3543.svc.cluster.local' Aug 25 00:17:41.317: INFO: stderr: "I0825 00:17:41.206690 1389 log.go:181] (0xc00003a0b0) (0xc000b84000) Create stream\nI0825 00:17:41.206738 1389 log.go:181] (0xc00003a0b0) (0xc000b84000) Stream added, broadcasting: 1\nI0825 00:17:41.209230 1389 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI0825 00:17:41.209262 1389 log.go:181] (0xc00003a0b0) (0xc000e92000) Create stream\nI0825 00:17:41.209271 1389 log.go:181] (0xc00003a0b0) (0xc000e92000) Stream added, broadcasting: 3\nI0825 00:17:41.210228 1389 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI0825 00:17:41.210271 1389 log.go:181] (0xc00003a0b0) (0xc000b840a0) Create stream\nI0825 00:17:41.210283 1389 log.go:181] (0xc00003a0b0) (0xc000b840a0) Stream added, broadcasting: 5\nI0825 00:17:41.211434 1389 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI0825 00:17:41.303379 1389 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0825 00:17:41.303407 1389 log.go:181] (0xc000b840a0) (5) Data frame handling\nI0825 00:17:41.303415 1389 log.go:181] (0xc000b840a0) (5) Data frame sent\n+ nslookup nodeport-service.services-3543.svc.cluster.local\nI0825 00:17:41.307928 1389 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0825 00:17:41.307942 1389 log.go:181] (0xc000e92000) (3) Data frame handling\nI0825 00:17:41.307952 1389 log.go:181] (0xc000e92000) (3) Data frame sent\nI0825 00:17:41.308634 1389 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0825 00:17:41.308647 1389 log.go:181] (0xc000e92000) (3) Data frame handling\nI0825 00:17:41.308657 1389 log.go:181] (0xc000e92000) (3) Data frame sent\nI0825 00:17:41.309193 1389 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0825 00:17:41.309235 1389 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0825 00:17:41.309266 1389 log.go:181] (0xc000b840a0) (5) Data frame handling\nI0825 00:17:41.309286 1389 log.go:181] (0xc000e92000) (3) Data frame handling\nI0825 00:17:41.310861 1389 log.go:181] (0xc00003a0b0) Data frame received for 1\nI0825 00:17:41.310870 1389 log.go:181] (0xc000b84000) (1) Data frame handling\nI0825 00:17:41.310878 1389 log.go:181] (0xc000b84000) (1) Data frame sent\nI0825 00:17:41.310886 1389 log.go:181] (0xc00003a0b0) (0xc000b84000) Stream removed, broadcasting: 1\nI0825 00:17:41.310920 1389 log.go:181] (0xc00003a0b0) Go away received\nI0825 00:17:41.311131 1389 log.go:181] (0xc00003a0b0) (0xc000b84000) Stream removed, broadcasting: 1\nI0825 00:17:41.311143 1389 log.go:181] (0xc00003a0b0) (0xc000e92000) Stream removed, 
broadcasting: 3\nI0825 00:17:41.311150 1389 log.go:181] (0xc00003a0b0) (0xc000b840a0) Stream removed, broadcasting: 5\n" Aug 25 00:17:41.317: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-3543.svc.cluster.local\tcanonical name = externalsvc.services-3543.svc.cluster.local.\nName:\texternalsvc.services-3543.svc.cluster.local\nAddress: 10.101.161.254\n\n" STEP: deleting ReplicationController externalsvc in namespace services-3543, will wait for the garbage collector to delete the pods Aug 25 00:17:41.377: INFO: Deleting ReplicationController externalsvc took: 7.029993ms Aug 25 00:17:41.777: INFO: Terminating ReplicationController externalsvc pods took: 400.254469ms Aug 25 00:17:50.437: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:17:50.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3543" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:23.003 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":303,"completed":168,"skipped":2862,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:17:50.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Aug 25 00:17:50.994: INFO: Waiting up to 5m0s for pod "downward-api-31b29e6f-b4c5-4210-98cc-ccccd8bfcdf2" in namespace "downward-api-9265" to be "Succeeded or Failed" Aug 25 00:17:50.998: INFO: Pod "downward-api-31b29e6f-b4c5-4210-98cc-ccccd8bfcdf2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.459327ms Aug 25 00:17:53.006: INFO: Pod "downward-api-31b29e6f-b4c5-4210-98cc-ccccd8bfcdf2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.011953749s Aug 25 00:17:55.090: INFO: Pod "downward-api-31b29e6f-b4c5-4210-98cc-ccccd8bfcdf2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096349083s Aug 25 00:17:57.093: INFO: Pod "downward-api-31b29e6f-b4c5-4210-98cc-ccccd8bfcdf2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.099249747s STEP: Saw pod success Aug 25 00:17:57.093: INFO: Pod "downward-api-31b29e6f-b4c5-4210-98cc-ccccd8bfcdf2" satisfied condition "Succeeded or Failed" Aug 25 00:17:57.095: INFO: Trying to get logs from node latest-worker2 pod downward-api-31b29e6f-b4c5-4210-98cc-ccccd8bfcdf2 container dapi-container: STEP: delete the pod Aug 25 00:17:57.201: INFO: Waiting for pod downward-api-31b29e6f-b4c5-4210-98cc-ccccd8bfcdf2 to disappear Aug 25 00:17:57.221: INFO: Pod downward-api-31b29e6f-b4c5-4210-98cc-ccccd8bfcdf2 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:17:57.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9265" for this suite. • [SLOW TEST:6.587 seconds] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":303,"completed":169,"skipped":2879,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:17:57.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 25 00:17:57.802: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 25 00:17:59.929: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911477, loc:(*time.Location)(0x7712980)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911477, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911477, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911477, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 25 00:18:01.976: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911477, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911477, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911477, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733911477, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 25 00:18:04.968: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 25 00:18:04.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9107-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:18:06.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5802" for this suite. STEP: Destroying namespace "webhook-5802-markers" for this suite. 
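------------------------------
For the custom-resource case the webhook registration has the same shape as for built-ins; only the Rules block changes to target the CRD's group and plural. A fragment under that assumption, reusing the earlier webhook sketch for everything else; the served version "v1" is a guess, while the group and plural come from the CRD name in the log:

package e2esketch

import admv1 "k8s.io/api/admissionregistration/v1"

// customResourceRules scopes a mutating webhook to the test CRD registered
// above rather than to a built-in resource.
func customResourceRules() []admv1.RuleWithOperations {
	return []admv1.RuleWithOperations{{
		Operations: []admv1.OperationType{admv1.Create},
		Rule: admv1.Rule{
			APIGroups:   []string{"webhook.example.com"},
			APIVersions: []string{"v1"}, // illustrative served version
			Resources:   []string{"e2e-test-webhook-9107-crds"},
		},
	}}
}
------------------------------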
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.995 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":303,"completed":170,"skipped":2880,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:18:06.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-44454dd4-b93c-457a-986f-69208302e79b STEP: Creating the pod STEP: Updating configmap configmap-test-upd-44454dd4-b93c-457a-986f-69208302e79b STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:19:17.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6220" for this suite. 
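------------------------------
"Updating configmap ... waiting to observe update in volume" is an ordinary Update on the object plus patience: the kubelet refreshes configMap volumes on its periodic sync, so the mounted file changes eventually rather than atomically with the API write, which is why that phase above takes on the order of a minute. A sketch of the update half, with illustrative key/value names:

package e2esketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// updateConfigMapValue rewrites the data the pod's volume projects; the
// kubelet picks the change up on a later sync loop.
func updateConfigMapValue(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	cm, err := c.CoreV1().ConfigMaps(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	cm.Data = map[string]string{"data-1": "value-2"} // illustrative key/value
	_, err = c.CoreV1().ConfigMaps(ns).Update(ctx, cm, metav1.UpdateOptions{})
	return err
}
------------------------------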
• [SLOW TEST:70.838 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":171,"skipped":2886,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:19:17.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should support proxy with --port 0 [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server Aug 25 00:19:17.443: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:19:17.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-589" for this suite. 
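What the proxy invocation above does, sketched by hand: --port=0 (the -p 0 seen in the log) asks kubectl proxy to bind an ephemeral port and report it, and the spec then curls /api/ through it. The port in the curl line is a placeholder for whatever the proxy prints:

    kubectl proxy --port=0 --disable-filter &
    # the proxy prints the port it bound, e.g. "Starting to serve on 127.0.0.1:39293"
    curl -s http://127.0.0.1:<printed-port>/api/    # placeholder port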
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":303,"completed":172,"skipped":2891,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:19:17.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 25 00:19:18.253: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-a045facc-38c1-4de7-9e10-ecdf362d80a1" in namespace "security-context-test-7171" to be "Succeeded or Failed" Aug 25 00:19:18.256: INFO: Pod "busybox-privileged-false-a045facc-38c1-4de7-9e10-ecdf362d80a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.972996ms Aug 25 00:19:20.260: INFO: Pod "busybox-privileged-false-a045facc-38c1-4de7-9e10-ecdf362d80a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007511309s Aug 25 00:19:22.543: INFO: Pod "busybox-privileged-false-a045facc-38c1-4de7-9e10-ecdf362d80a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.290703112s Aug 25 00:19:25.475: INFO: Pod "busybox-privileged-false-a045facc-38c1-4de7-9e10-ecdf362d80a1": Phase="Pending", Reason="", readiness=false. Elapsed: 7.222036649s Aug 25 00:19:28.122: INFO: Pod "busybox-privileged-false-a045facc-38c1-4de7-9e10-ecdf362d80a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.869044349s Aug 25 00:19:28.122: INFO: Pod "busybox-privileged-false-a045facc-38c1-4de7-9e10-ecdf362d80a1" satisfied condition "Succeeded or Failed" Aug 25 00:19:28.229: INFO: Got logs for pod "busybox-privileged-false-a045facc-38c1-4de7-9e10-ecdf362d80a1": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:19:28.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7171" for this suite. 
• [SLOW TEST:11.340 seconds] [k8s.io] Security Context /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with privileged /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":173,"skipped":2904,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:19:28.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:20:13.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9209" for this suite. 
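The three containers above appear to differ only in restart policy (reading the rpa/rpof/rpn suffixes as Always/OnFailure/Never is an inference from the names, not stated in the log). A minimal sketch of observing phase and restart count for the OnFailure case, with illustrative names:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: terminate-demo
    spec:
      restartPolicy: OnFailure
      containers:
      - name: c
        image: busybox
        command: ["sh", "-c", "exit 1"]
    EOF
    # phase stays Running while the kubelet retries; restartCount climbs
    kubectl get pod terminate-demo \
      -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount}'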
• [SLOW TEST:44.239 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":303,"completed":174,"skipped":2906,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:20:13.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:20:25.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1772" for this suite. • [SLOW TEST:12.549 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":303,"completed":175,"skipped":2991,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:20:25.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-2dccfc7d-7a11-4c60-bbdf-285fb43dc3b8 STEP: Creating a pod to test consume configMaps Aug 25 00:20:26.541: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2ae86f03-8c0b-46cd-a02d-eae8876540e1" in namespace "projected-9845" to be "Succeeded or Failed" Aug 25 00:20:26.557: INFO: Pod "pod-projected-configmaps-2ae86f03-8c0b-46cd-a02d-eae8876540e1": Phase="Pending", Reason="", readiness=false. Elapsed: 16.023476ms Aug 25 00:20:28.734: INFO: Pod "pod-projected-configmaps-2ae86f03-8c0b-46cd-a02d-eae8876540e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192369837s Aug 25 00:20:31.080: INFO: Pod "pod-projected-configmaps-2ae86f03-8c0b-46cd-a02d-eae8876540e1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.53832073s Aug 25 00:20:33.112: INFO: Pod "pod-projected-configmaps-2ae86f03-8c0b-46cd-a02d-eae8876540e1": Phase="Running", Reason="", readiness=true. Elapsed: 6.570074075s Aug 25 00:20:35.188: INFO: Pod "pod-projected-configmaps-2ae86f03-8c0b-46cd-a02d-eae8876540e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.646663072s STEP: Saw pod success Aug 25 00:20:35.188: INFO: Pod "pod-projected-configmaps-2ae86f03-8c0b-46cd-a02d-eae8876540e1" satisfied condition "Succeeded or Failed" Aug 25 00:20:35.191: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-2ae86f03-8c0b-46cd-a02d-eae8876540e1 container projected-configmap-volume-test: STEP: delete the pod Aug 25 00:20:35.381: INFO: Waiting for pod pod-projected-configmaps-2ae86f03-8c0b-46cd-a02d-eae8876540e1 to disappear Aug 25 00:20:35.441: INFO: Pod pod-projected-configmaps-2ae86f03-8c0b-46cd-a02d-eae8876540e1 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:20:35.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9845" for this suite. 
• [SLOW TEST:10.001 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":176,"skipped":2991,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:20:35.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-4129 STEP: creating service affinity-clusterip in namespace services-4129 STEP: creating replication controller affinity-clusterip in namespace services-4129 I0825 00:20:37.379666 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-4129, replica count: 3 I0825 00:20:40.430044 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0825 00:20:43.430317 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0825 00:20:46.430562 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0825 00:20:49.430668 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0825 00:20:52.430883 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 25 00:20:52.437: INFO: Creating new exec pod Aug 25 00:21:01.835: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-4129 execpod-affinity6nh78 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Aug 25 00:21:20.755: INFO: stderr: "I0825 00:21:20.677319 1422 log.go:181] (0xc0001ce0b0) 
(0xc000c8e140) Create stream\nI0825 00:21:20.677403 1422 log.go:181] (0xc0001ce0b0) (0xc000c8e140) Stream added, broadcasting: 1\nI0825 00:21:20.679612 1422 log.go:181] (0xc0001ce0b0) Reply frame received for 1\nI0825 00:21:20.679676 1422 log.go:181] (0xc0001ce0b0) (0xc000c8e1e0) Create stream\nI0825 00:21:20.679710 1422 log.go:181] (0xc0001ce0b0) (0xc000c8e1e0) Stream added, broadcasting: 3\nI0825 00:21:20.680554 1422 log.go:181] (0xc0001ce0b0) Reply frame received for 3\nI0825 00:21:20.680612 1422 log.go:181] (0xc0001ce0b0) (0xc000a58000) Create stream\nI0825 00:21:20.680637 1422 log.go:181] (0xc0001ce0b0) (0xc000a58000) Stream added, broadcasting: 5\nI0825 00:21:20.681765 1422 log.go:181] (0xc0001ce0b0) Reply frame received for 5\nI0825 00:21:20.742461 1422 log.go:181] (0xc0001ce0b0) Data frame received for 5\nI0825 00:21:20.742492 1422 log.go:181] (0xc000a58000) (5) Data frame handling\nI0825 00:21:20.742510 1422 log.go:181] (0xc000a58000) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nI0825 00:21:20.742609 1422 log.go:181] (0xc0001ce0b0) Data frame received for 5\nI0825 00:21:20.742641 1422 log.go:181] (0xc000a58000) (5) Data frame handling\nI0825 00:21:20.742665 1422 log.go:181] (0xc000a58000) (5) Data frame sent\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0825 00:21:20.742984 1422 log.go:181] (0xc0001ce0b0) Data frame received for 3\nI0825 00:21:20.743002 1422 log.go:181] (0xc000c8e1e0) (3) Data frame handling\nI0825 00:21:20.743040 1422 log.go:181] (0xc0001ce0b0) Data frame received for 5\nI0825 00:21:20.743067 1422 log.go:181] (0xc000a58000) (5) Data frame handling\nI0825 00:21:20.746054 1422 log.go:181] (0xc0001ce0b0) Data frame received for 1\nI0825 00:21:20.746070 1422 log.go:181] (0xc000c8e140) (1) Data frame handling\nI0825 00:21:20.746090 1422 log.go:181] (0xc000c8e140) (1) Data frame sent\nI0825 00:21:20.746103 1422 log.go:181] (0xc0001ce0b0) (0xc000c8e140) Stream removed, broadcasting: 1\nI0825 00:21:20.746192 1422 log.go:181] (0xc0001ce0b0) Go away received\nI0825 00:21:20.746397 1422 log.go:181] (0xc0001ce0b0) (0xc000c8e140) Stream removed, broadcasting: 1\nI0825 00:21:20.746411 1422 log.go:181] (0xc0001ce0b0) (0xc000c8e1e0) Stream removed, broadcasting: 3\nI0825 00:21:20.746418 1422 log.go:181] (0xc0001ce0b0) (0xc000a58000) Stream removed, broadcasting: 5\n" Aug 25 00:21:20.756: INFO: stdout: "" Aug 25 00:21:20.756: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-4129 execpod-affinity6nh78 -- /bin/sh -x -c nc -zv -t -w 2 10.99.56.201 80' Aug 25 00:21:21.015: INFO: stderr: "I0825 00:21:20.922297 1440 log.go:181] (0xc0000fa000) (0xc0004a6000) Create stream\nI0825 00:21:20.922360 1440 log.go:181] (0xc0000fa000) (0xc0004a6000) Stream added, broadcasting: 1\nI0825 00:21:20.924917 1440 log.go:181] (0xc0000fa000) Reply frame received for 1\nI0825 00:21:20.924955 1440 log.go:181] (0xc0000fa000) (0xc0004a60a0) Create stream\nI0825 00:21:20.924965 1440 log.go:181] (0xc0000fa000) (0xc0004a60a0) Stream added, broadcasting: 3\nI0825 00:21:20.925858 1440 log.go:181] (0xc0000fa000) Reply frame received for 3\nI0825 00:21:20.925901 1440 log.go:181] (0xc0000fa000) (0xc000dc4000) Create stream\nI0825 00:21:20.925916 1440 log.go:181] (0xc0000fa000) (0xc000dc4000) Stream added, broadcasting: 5\nI0825 00:21:20.926874 1440 log.go:181] (0xc0000fa000) Reply frame received for 5\nI0825 00:21:21.007065 1440 log.go:181] (0xc0000fa000) Data frame received for 3\nI0825 
00:21:21.007105 1440 log.go:181] (0xc0004a60a0) (3) Data frame handling\nI0825 00:21:21.007131 1440 log.go:181] (0xc0000fa000) Data frame received for 5\nI0825 00:21:21.007142 1440 log.go:181] (0xc000dc4000) (5) Data frame handling\nI0825 00:21:21.007153 1440 log.go:181] (0xc000dc4000) (5) Data frame sent\nI0825 00:21:21.007161 1440 log.go:181] (0xc0000fa000) Data frame received for 5\nI0825 00:21:21.007167 1440 log.go:181] (0xc000dc4000) (5) Data frame handling\n+ nc -zv -t -w 2 10.99.56.201 80\nConnection to 10.99.56.201 80 port [tcp/http] succeeded!\nI0825 00:21:21.008185 1440 log.go:181] (0xc0000fa000) Data frame received for 1\nI0825 00:21:21.008251 1440 log.go:181] (0xc0004a6000) (1) Data frame handling\nI0825 00:21:21.008311 1440 log.go:181] (0xc0004a6000) (1) Data frame sent\nI0825 00:21:21.008369 1440 log.go:181] (0xc0000fa000) (0xc0004a6000) Stream removed, broadcasting: 1\nI0825 00:21:21.008393 1440 log.go:181] (0xc0000fa000) Go away received\nI0825 00:21:21.008673 1440 log.go:181] (0xc0000fa000) (0xc0004a6000) Stream removed, broadcasting: 1\nI0825 00:21:21.008686 1440 log.go:181] (0xc0000fa000) (0xc0004a60a0) Stream removed, broadcasting: 3\nI0825 00:21:21.008695 1440 log.go:181] (0xc0000fa000) (0xc000dc4000) Stream removed, broadcasting: 5\n" Aug 25 00:21:21.015: INFO: stdout: "" Aug 25 00:21:21.015: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-4129 execpod-affinity6nh78 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.99.56.201:80/ ; done' Aug 25 00:21:21.316: INFO: stderr: "I0825 00:21:21.144939 1458 log.go:181] (0xc0005c8d10) (0xc0004d2c80) Create stream\nI0825 00:21:21.145023 1458 log.go:181] (0xc0005c8d10) (0xc0004d2c80) Stream added, broadcasting: 1\nI0825 00:21:21.149807 1458 log.go:181] (0xc0005c8d10) Reply frame received for 1\nI0825 00:21:21.149839 1458 log.go:181] (0xc0005c8d10) (0xc000b30000) Create stream\nI0825 00:21:21.149848 1458 log.go:181] (0xc0005c8d10) (0xc000b30000) Stream added, broadcasting: 3\nI0825 00:21:21.150628 1458 log.go:181] (0xc0005c8d10) Reply frame received for 3\nI0825 00:21:21.150673 1458 log.go:181] (0xc0005c8d10) (0xc000376a00) Create stream\nI0825 00:21:21.150689 1458 log.go:181] (0xc0005c8d10) (0xc000376a00) Stream added, broadcasting: 5\nI0825 00:21:21.151377 1458 log.go:181] (0xc0005c8d10) Reply frame received for 5\nI0825 00:21:21.217654 1458 log.go:181] (0xc0005c8d10) Data frame received for 3\nI0825 00:21:21.217706 1458 log.go:181] (0xc000b30000) (3) Data frame handling\nI0825 00:21:21.217716 1458 log.go:181] (0xc000b30000) (3) Data frame sent\nI0825 00:21:21.217735 1458 log.go:181] (0xc0005c8d10) Data frame received for 5\nI0825 00:21:21.217740 1458 log.go:181] (0xc000376a00) (5) Data frame handling\nI0825 00:21:21.217746 1458 log.go:181] (0xc000376a00) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.56.201:80/\nI0825 00:21:21.221042 1458 log.go:181] (0xc0005c8d10) Data frame received for 3\nI0825 00:21:21.221062 1458 log.go:181] (0xc000b30000) (3) Data frame handling\nI0825 00:21:21.221076 1458 log.go:181] (0xc000b30000) (3) Data frame sent\nI0825 00:21:21.221351 1458 log.go:181] (0xc0005c8d10) Data frame received for 5\nI0825 00:21:21.221373 1458 log.go:181] (0xc000376a00) (5) Data frame handling\nI0825 00:21:21.221389 1458 log.go:181] (0xc000376a00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.56.201:80/\nI0825 00:21:21.221429 1458 
log.go:181] (0xc0005c8d10) Data frame received for 3\nI0825 00:21:21.221448 1458 log.go:181] (0xc000b30000) (3) Data frame handling\nI0825 00:21:21.221466 1458 log.go:181] (0xc000b30000) (3) Data frame sent\nI0825 00:21:21.228098 1458 log.go:181] (0xc0005c8d10) Data frame received for 3\nI0825 00:21:21.228116 1458 log.go:181] (0xc000b30000) (3) Data frame handling\nI0825 00:21:21.228133 1458 log.go:181] (0xc000b30000) (3) Data frame sent\nI0825 00:21:21.228801 1458 log.go:181] (0xc0005c8d10) Data frame received for 3\nI0825 00:21:21.228857 1458 log.go:181] (0xc000b30000) (3) Data frame handling\nI0825 00:21:21.228879 1458 log.go:181] (0xc000b30000) (3) Data frame sent\nI0825 00:21:21.228898 1458 log.go:181] (0xc0005c8d10) Data frame received for 5\nI0825 00:21:21.228921 1458 log.go:181] (0xc000376a00) (5) Data frame handling\nI0825 00:21:21.228940 1458 log.go:181] (0xc000376a00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.56.201:80/\nI0825 00:21:21.233896 1458 log.go:181] (0xc0005c8d10) Data frame received for 3\nI0825 00:21:21.233916 1458 log.go:181] (0xc000b30000) (3) Data frame handling\nI0825 00:21:21.233932 1458 log.go:181] (0xc000b30000) (3) Data frame sent\nI0825 00:21:21.234470 1458 log.go:181] (0xc0005c8d10) Data frame received for 5\nI0825 00:21:21.234487 1458 log.go:181] (0xc000376a00) (5) Data frame handling\nI0825 00:21:21.234502 1458 log.go:181] (0xc000376a00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.56.201:80/\nI0825 00:21:21.234540 1458 log.go:181] (0xc0005c8d10) Data frame received for 3\nI0825 00:21:21.234560 1458 log.go:181] (0xc000b30000) (3) Data frame handling\nI0825 00:21:21.234575 1458 log.go:181] (0xc000b30000) (3) Data frame sent\nI0825 00:21:21.238097 1458 log.go:181] (0xc0005c8d10) Data frame received for 3\nI0825 00:21:21.238116 1458 log.go:181] (0xc000b30000) (3) Data frame handling\nI0825 00:21:21.238130 1458 log.go:181] (0xc000b30000) (3) Data frame sent\nI0825 00:21:21.238928 1458 log.go:181] (0xc0005c8d10) Data frame received for 3\nI0825 00:21:21.238949 1458 log.go:181] (0xc0005c8d10) Data frame received for 5\nI0825 00:21:21.238974 1458 log.go:181] (0xc000376a00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.56.201:80/\nI0825 00:21:21.238987 1458 log.go:181] (0xc000b30000) (3) Data frame handling\nI0825 00:21:21.239002 1458 log.go:181] (0xc000376a00) (5) Data frame sent\nI0825 00:21:21.239015 1458 log.go:181] (0xc000b30000) (3) Data frame sent\nI0825 00:21:21.243107 1458 log.go:181] (0xc0005c8d10) Data frame received for 3\nI0825 00:21:21.243118 1458 log.go:181] (0xc000b30000) (3) Data frame handling\nI0825 00:21:21.243125 1458 log.go:181] (0xc000b30000) (3) Data frame sent\nI0825 00:21:21.243703 1458 log.go:181] (0xc0005c8d10) Data frame received for 3\nI0825 00:21:21.243720 1458 log.go:181] (0xc000b30000) (3) Data frame handling\nI0825 00:21:21.243737 1458 log.go:181] (0xc000b30000) (3) Data frame sent\nI0825 00:21:21.245119 1458 log.go:181] (0xc0005c8d10) Data frame received for 5\nI0825 00:21:21.245132 1458 log.go:181] (0xc000376a00) (5) Data frame handling\nI0825 00:21:21.245138 1458 log.go:181] (0xc000376a00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.56.201:80/\nI0825 00:21:21.247132 1458 log.go:181] (0xc0005c8d10) Data frame received for 3\nI0825 00:21:21.247158 1458 log.go:181] (0xc000b30000) (3) Data frame handling\nI0825 00:21:21.247175 1458 log.go:181] (0xc000b30000) (3) Data frame sent\nI0825 00:21:21.247811 1458 
log.go:181] (0xc0005c8d10) Data frame received for 5\nI0825 00:21:21.247829 1458 log.go:181] (0xc000376a00) (5) Data frame handling\nI0825 00:21:21.247841 1458 log.go:181] (0xc000376a00) (5) Data frame sent\nI0825 00:21:21.247850 1458 log.go:181] (0xc0005c8d10) Data frame received for 5\nI0825 00:21:21.247859 1458 log.go:181] (0xc000376a00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.56.201:80/\nI0825 00:21:21.247880 1458 log.go:181] (0xc0005c8d10) Data frame received for 3\nI0825 00:21:21.247922 1458 log.go:181] (0xc000b30000) (3) Data frame handling\nI0825 00:21:21.247940 1458 log.go:181] (0xc000b30000) (3) Data frame sent\nI0825 00:21:21.247959 1458 log.go:181] (0xc000376a00) (5) Data frame sent\nI0825 00:21:21.251479 1458 log.go:181] (0xc0005c8d10) Data frame received for 3\nI0825 00:21:21.251494 1458 log.go:181] (0xc000b30000) (3) Data frame handling\nI0825 00:21:21.251500 1458 log.go:181] (0xc000b30000) (3) Data frame sent\nI0825 00:21:21.251954 1458 log.go:181] (0xc0005c8d10) Data frame received for 3\nI0825 00:21:21.251974 1458 log.go:181] (0xc000b30000) (3) Data frame handling\nI0825 00:21:21.251985 1458 log.go:181] (0xc000b30000) (3) Data frame sent\nI0825 00:21:21.252011 1458 log.go:181] (0xc0005c8d10) Data frame received for 5\nI0825 00:21:21.252035 1458 log.go:181] (0xc000376a00) (5) Data frame handling\nI0825 00:21:21.252051 1458 log.go:181] (0xc000376a00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.56.201:80/\nI0825 00:21:21.255632 1458 log.go:181] (0xc0005c8d10) Data frame received for 3\nI0825 00:21:21.255651 1458 log.go:181] (0xc000b30000) (3) Data frame handling\nI0825 00:21:21.255668 1458 log.go:181] (0xc000b30000) (3) Data frame sent\nI0825 00:21:21.256190 1458 log.go:181] (0xc0005c8d10) Data frame received for 3\nI0825 00:21:21.256213 1458 log.go:181] (0xc000b30000) (3) Data frame handling\nI0825 00:21:21.256228 1458 log.go:181] (0xc000b30000) (3) Data frame sent\nI0825 00:21:21.256239 1458 log.go:181] (0xc0005c8d10) Data frame received for 5\nI0825 00:21:21.256250 1458 log.go:181] (0xc000376a00) (5) Data frame handling\nI0825 00:21:21.256257 1458 log.go:181] (0xc000376a00) (5) Data frame sent\nI0825 00:21:21.256263 1458 log.go:181] (0xc0005c8d10) Data frame received for 5\nI0825 00:21:21.256267 1458 log.go:181] (0xc000376a00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.56.201:80/\nI0825 00:21:21.256282 1458 log.go:181] (0xc000376a00) (5) Data frame sent\nI0825 00:21:21.263036 1458 log.go:181] (0xc0005c8d10) Data frame received for 3\nI0825 00:21:21.263049 1458 log.go:181] (0xc000b30000) (3) Data frame handling\nI0825 00:21:21.263056 1458 log.go:181] (0xc000b30000) (3) Data frame sent\nI0825 00:21:21.263542 1458 log.go:181] (0xc0005c8d10) Data frame received for 5\nI0825 00:21:21.263554 1458 log.go:181] (0xc000376a00) (5) Data frame handling\nI0825 00:21:21.263564 1458 log.go:181] (0xc000376a00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0825 00:21:21.263657 1458 log.go:181] (0xc0005c8d10) Data frame received for 5\nI0825 00:21:21.263674 1458 log.go:181] (0xc000376a00) (5) Data frame handling\nI0825 00:21:21.263685 1458 log.go:181] (0xc000376a00) (5) Data frame sent\n http://10.99.56.201:80/\nI0825 00:21:21.263948 1458 log.go:181] (0xc0005c8d10) Data frame received for 3\nI0825 00:21:21.263970 1458 log.go:181] (0xc000b30000) (3) Data frame handling\nI0825 00:21:21.263986 1458 log.go:181] (0xc000b30000) (3) Data frame sent\nI0825 00:21:21.267596 1458 
log.go:181] (0xc0005c8d10) Data frame received for 3\nI0825 00:21:21.267612 1458 log.go:181] (0xc000b30000) (3) Data frame handling\nI0825 00:21:21.267624 1458 log.go:181] (0xc000b30000) (3) Data frame sent\nI0825 00:21:21.268567 1458 log.go:181] (0xc0005c8d10) Data frame received for 5\nI0825 00:21:21.268586 1458 log.go:181] (0xc000376a00) (5) Data frame handling\nI0825 00:21:21.268593 1458 log.go:181] (0xc000376a00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.56.201:80/\nI0825 00:21:21.268604 1458 log.go:181] (0xc0005c8d10) Data frame received for 3\nI0825 00:21:21.268612 1458 log.go:181] (0xc000b30000) (3) Data frame handling\nI0825 00:21:21.268624 1458 log.go:181] (0xc000b30000) (3) Data frame sent\nI0825 00:21:21.275384 1458 log.go:181] (0xc0005c8d10) Data frame received for 3\nI0825 00:21:21.275412 1458 log.go:181] (0xc000b30000) (3) Data frame handling\nI0825 00:21:21.275431 1458 log.go:181] (0xc000b30000) (3) Data frame sent\nI0825 00:21:21.275980 1458 log.go:181] (0xc0005c8d10) Data frame received for 5\nI0825 00:21:21.275994 1458 log.go:181] (0xc000376a00) (5) Data frame handling\nI0825 00:21:21.276000 1458 log.go:181] (0xc000376a00) (5) Data frame sent\n+ echo\n+ curl -q -sI0825 00:21:21.276099 1458 log.go:181] (0xc0005c8d10) Data frame received for 5\nI0825 00:21:21.276128 1458 log.go:181] (0xc000376a00) (5) Data frame handling\nI0825 00:21:21.276150 1458 log.go:181] (0xc000376a00) (5) Data frame sent\n --connect-timeout 2 http://10.99.56.201:80/\nI0825 00:21:21.276201 1458 log.go:181] (0xc0005c8d10) Data frame received for 3\nI0825 00:21:21.276237 1458 log.go:181] (0xc000b30000) (3) Data frame handling\nI0825 00:21:21.276264 1458 log.go:181] (0xc000b30000) (3) Data frame sent\nI0825 00:21:21.281596 1458 log.go:181] (0xc0005c8d10) Data frame received for 3\nI0825 00:21:21.281615 1458 log.go:181] (0xc000b30000) (3) Data frame handling\nI0825 00:21:21.281640 1458 log.go:181] (0xc000b30000) (3) Data frame sent\nI0825 00:21:21.282454 1458 log.go:181] (0xc0005c8d10) Data frame received for 3\nI0825 00:21:21.282477 1458 log.go:181] (0xc000b30000) (3) Data frame handling\nI0825 00:21:21.282484 1458 log.go:181] (0xc000b30000) (3) Data frame sent\nI0825 00:21:21.282492 1458 log.go:181] (0xc0005c8d10) Data frame received for 5\nI0825 00:21:21.282497 1458 log.go:181] (0xc000376a00) (5) Data frame handling\nI0825 00:21:21.282501 1458 log.go:181] (0xc000376a00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.56.201:80/\nI0825 00:21:21.287866 1458 log.go:181] (0xc0005c8d10) Data frame received for 3\nI0825 00:21:21.287900 1458 log.go:181] (0xc000b30000) (3) Data frame handling\nI0825 00:21:21.287926 1458 log.go:181] (0xc000b30000) (3) Data frame sent\nI0825 00:21:21.288458 1458 log.go:181] (0xc0005c8d10) Data frame received for 3\nI0825 00:21:21.288470 1458 log.go:181] (0xc000b30000) (3) Data frame handling\nI0825 00:21:21.288475 1458 log.go:181] (0xc000b30000) (3) Data frame sent\nI0825 00:21:21.288494 1458 log.go:181] (0xc0005c8d10) Data frame received for 5\nI0825 00:21:21.288519 1458 log.go:181] (0xc000376a00) (5) Data frame handling\nI0825 00:21:21.288544 1458 log.go:181] (0xc000376a00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.56.201:80/\nI0825 00:21:21.292343 1458 log.go:181] (0xc0005c8d10) Data frame received for 3\nI0825 00:21:21.292367 1458 log.go:181] (0xc000b30000) (3) Data frame handling\nI0825 00:21:21.292399 1458 log.go:181] (0xc000b30000) (3) Data frame sent\nI0825 00:21:21.293024 1458 
log.go:181] (0xc0005c8d10) Data frame received for 3\nI0825 00:21:21.293058 1458 log.go:181] (0xc000b30000) (3) Data frame handling\nI0825 00:21:21.293081 1458 log.go:181] (0xc000b30000) (3) Data frame sent\nI0825 00:21:21.293104 1458 log.go:181] (0xc0005c8d10) Data frame received for 5\nI0825 00:21:21.293118 1458 log.go:181] (0xc000376a00) (5) Data frame handling\nI0825 00:21:21.293135 1458 log.go:181] (0xc000376a00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.56.201:80/\nI0825 00:21:21.298749 1458 log.go:181] (0xc0005c8d10) Data frame received for 5\nI0825 00:21:21.298776 1458 log.go:181] (0xc000376a00) (5) Data frame handling\nI0825 00:21:21.298786 1458 log.go:181] (0xc000376a00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.56.201:80/\nI0825 00:21:21.298828 1458 log.go:181] (0xc0005c8d10) Data frame received for 3\nI0825 00:21:21.298854 1458 log.go:181] (0xc000b30000) (3) Data frame handling\nI0825 00:21:21.298879 1458 log.go:181] (0xc000b30000) (3) Data frame sent\nI0825 00:21:21.304312 1458 log.go:181] (0xc0005c8d10) Data frame received for 3\nI0825 00:21:21.304331 1458 log.go:181] (0xc000b30000) (3) Data frame handling\nI0825 00:21:21.304346 1458 log.go:181] (0xc000b30000) (3) Data frame sent\nI0825 00:21:21.306028 1458 log.go:181] (0xc0005c8d10) Data frame received for 5\nI0825 00:21:21.306063 1458 log.go:181] (0xc000376a00) (5) Data frame handling\nI0825 00:21:21.306159 1458 log.go:181] (0xc0005c8d10) Data frame received for 3\nI0825 00:21:21.306172 1458 log.go:181] (0xc000b30000) (3) Data frame handling\nI0825 00:21:21.307540 1458 log.go:181] (0xc0005c8d10) Data frame received for 1\nI0825 00:21:21.307560 1458 log.go:181] (0xc0004d2c80) (1) Data frame handling\nI0825 00:21:21.307585 1458 log.go:181] (0xc0004d2c80) (1) Data frame sent\nI0825 00:21:21.307601 1458 log.go:181] (0xc0005c8d10) (0xc0004d2c80) Stream removed, broadcasting: 1\nI0825 00:21:21.308046 1458 log.go:181] (0xc0005c8d10) (0xc0004d2c80) Stream removed, broadcasting: 1\nI0825 00:21:21.308067 1458 log.go:181] (0xc0005c8d10) (0xc000b30000) Stream removed, broadcasting: 3\nI0825 00:21:21.308077 1458 log.go:181] (0xc0005c8d10) (0xc000376a00) Stream removed, broadcasting: 5\n" Aug 25 00:21:21.317: INFO: stdout: "\naffinity-clusterip-xgtgm\naffinity-clusterip-xgtgm\naffinity-clusterip-xgtgm\naffinity-clusterip-xgtgm\naffinity-clusterip-xgtgm\naffinity-clusterip-xgtgm\naffinity-clusterip-xgtgm\naffinity-clusterip-xgtgm\naffinity-clusterip-xgtgm\naffinity-clusterip-xgtgm\naffinity-clusterip-xgtgm\naffinity-clusterip-xgtgm\naffinity-clusterip-xgtgm\naffinity-clusterip-xgtgm\naffinity-clusterip-xgtgm\naffinity-clusterip-xgtgm" Aug 25 00:21:21.317: INFO: Received response from host: affinity-clusterip-xgtgm Aug 25 00:21:21.317: INFO: Received response from host: affinity-clusterip-xgtgm Aug 25 00:21:21.317: INFO: Received response from host: affinity-clusterip-xgtgm Aug 25 00:21:21.317: INFO: Received response from host: affinity-clusterip-xgtgm Aug 25 00:21:21.317: INFO: Received response from host: affinity-clusterip-xgtgm Aug 25 00:21:21.317: INFO: Received response from host: affinity-clusterip-xgtgm Aug 25 00:21:21.317: INFO: Received response from host: affinity-clusterip-xgtgm Aug 25 00:21:21.317: INFO: Received response from host: affinity-clusterip-xgtgm Aug 25 00:21:21.317: INFO: Received response from host: affinity-clusterip-xgtgm Aug 25 00:21:21.317: INFO: Received response from host: affinity-clusterip-xgtgm Aug 25 00:21:21.317: INFO: Received response 
from host: affinity-clusterip-xgtgm Aug 25 00:21:21.317: INFO: Received response from host: affinity-clusterip-xgtgm Aug 25 00:21:21.317: INFO: Received response from host: affinity-clusterip-xgtgm Aug 25 00:21:21.317: INFO: Received response from host: affinity-clusterip-xgtgm Aug 25 00:21:21.317: INFO: Received response from host: affinity-clusterip-xgtgm Aug 25 00:21:21.317: INFO: Received response from host: affinity-clusterip-xgtgm Aug 25 00:21:21.317: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-4129, will wait for the garbage collector to delete the pods Aug 25 00:21:21.459: INFO: Deleting ReplicationController affinity-clusterip took: 5.893259ms Aug 25 00:21:22.060: INFO: Terminating ReplicationController affinity-clusterip pods took: 600.20788ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:21:40.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4129" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:64.387 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":177,"skipped":3004,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Ingress API should support creating Ingress API operations [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Ingress API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:21:40.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Aug 25 00:21:40.270: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Aug 25 00:21:40.273: INFO: starting watch STEP: patching STEP: updating Aug 25 00:21:40.282: INFO: waiting for watch events with expected annotations Aug 25 00:21:40.282: INFO: saw patched and updated annotations STEP: patching /status 
STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:21:40.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-215" for this suite. •{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":303,"completed":178,"skipped":3063,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:21:40.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:21:57.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8226" for this suite. • [SLOW TEST:17.321 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":303,"completed":179,"skipped":3070,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:21:57.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 25 00:21:57.765: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 25 00:21:57.781: INFO: Waiting for terminating namespaces to be deleted... Aug 25 00:21:57.783: INFO: Logging pods the apiserver thinks is on node latest-worker before test Aug 25 00:21:57.787: INFO: daemon-set-64t9w from daemonsets-1323 started at 2020-08-21 01:17:50 +0000 UTC (1 container statuses recorded) Aug 25 00:21:57.787: INFO: Container app ready: true, restart count 0 Aug 25 00:21:57.787: INFO: kindnet-gmpqb from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Aug 25 00:21:57.787: INFO: Container kindnet-cni ready: true, restart count 1 Aug 25 00:21:57.787: INFO: kube-proxy-82wrf from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Aug 25 00:21:57.787: INFO: Container kube-proxy ready: true, restart count 0 Aug 25 00:21:57.787: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Aug 25 00:21:57.790: INFO: daemon-set-jxhg7 from daemonsets-1323 started at 2020-08-21 01:17:50 +0000 UTC (1 container statuses recorded) Aug 25 00:21:57.790: INFO: Container app ready: true, restart count 0 Aug 25 00:21:57.790: INFO: kindnet-grzzh from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Aug 25 00:21:57.790: INFO: Container kindnet-cni ready: true, restart count 1 Aug 25 00:21:57.790: INFO: kube-proxy-fjk8r from kube-system started at 2020-08-15 09:42:29 +0000 UTC (1 container statuses recorded) Aug 25 00:21:57.790: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-b7c6ebad-4ccb-444b-9782-d5813d8cd928 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-b7c6ebad-4ccb-444b-9782-d5813d8cd928 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-b7c6ebad-4ccb-444b-9782-d5813d8cd928 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:27:14.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9051" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:316.523 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":303,"completed":180,"skipped":3145,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:27:14.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:27:18.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6121" for this suite. 
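A minimal sketch of the hostAliases behavior checked above: entries from spec.hostAliases are written by the kubelet into the container's /etc/hosts. Names are illustrative:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: hostaliases-demo
    spec:
      restartPolicy: Never
      hostAliases:
      - ip: "127.0.0.1"
        hostnames: ["foo.local", "bar.local"]
      containers:
      - name: c
        image: busybox
        command: ["cat", "/etc/hosts"]
    EOF
    kubectl logs hostaliases-demo | grep foo.local   # the alias line written by the kubelet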
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":181,"skipped":3186,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:27:18.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-3127, will wait for the garbage collector to delete the pods Aug 25 00:27:28.608: INFO: Deleting Job.batch foo took: 7.12294ms Aug 25 00:27:29.308: INFO: Terminating Job.batch foo pods took: 700.231164ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:28:09.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3127" for this suite. 
• [SLOW TEST:51.323 seconds] [sig-apps] Job /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":303,"completed":182,"skipped":3203,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:28:09.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-930e101f-33ed-401b-b648-1567e1d0f594 STEP: Creating a pod to test consume secrets Aug 25 00:28:09.823: INFO: Waiting up to 5m0s for pod "pod-secrets-0de70053-f775-4cce-8296-a6644fe19d74" in namespace "secrets-886" to be "Succeeded or Failed" Aug 25 00:28:09.997: INFO: Pod "pod-secrets-0de70053-f775-4cce-8296-a6644fe19d74": Phase="Pending", Reason="", readiness=false. Elapsed: 174.389909ms Aug 25 00:28:12.001: INFO: Pod "pod-secrets-0de70053-f775-4cce-8296-a6644fe19d74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177745964s Aug 25 00:28:14.004: INFO: Pod "pod-secrets-0de70053-f775-4cce-8296-a6644fe19d74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.181311838s STEP: Saw pod success Aug 25 00:28:14.004: INFO: Pod "pod-secrets-0de70053-f775-4cce-8296-a6644fe19d74" satisfied condition "Succeeded or Failed" Aug 25 00:28:14.007: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-0de70053-f775-4cce-8296-a6644fe19d74 container secret-volume-test: STEP: delete the pod Aug 25 00:28:14.058: INFO: Waiting for pod pod-secrets-0de70053-f775-4cce-8296-a6644fe19d74 to disappear Aug 25 00:28:14.072: INFO: Pod pod-secrets-0de70053-f775-4cce-8296-a6644fe19d74 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:28:14.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-886" for this suite. 
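A minimal sketch of consuming a Secret as a volume, as this spec does; names are illustrative:

    kubectl create secret generic demo-secret --from-literal=data-1=value-1
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-volume-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: busybox
        command: ["cat", "/etc/secret-volume/data-1"]
        volumeMounts:
        - name: secret
          mountPath: /etc/secret-volume
          readOnly: true
      volumes:
      - name: secret
        secret:
          secretName: demo-secret
    EOF
    kubectl logs secret-volume-demo    # prints value-1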
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":183,"skipped":3215,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:28:14.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 25 00:28:14.190: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4de457d9-1767-4d6b-a000-a08e22da5bbb" in namespace "projected-8782" to be "Succeeded or Failed" Aug 25 00:28:14.333: INFO: Pod "downwardapi-volume-4de457d9-1767-4d6b-a000-a08e22da5bbb": Phase="Pending", Reason="", readiness=false. Elapsed: 142.484909ms Aug 25 00:28:16.378: INFO: Pod "downwardapi-volume-4de457d9-1767-4d6b-a000-a08e22da5bbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.188335641s Aug 25 00:28:18.382: INFO: Pod "downwardapi-volume-4de457d9-1767-4d6b-a000-a08e22da5bbb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.191774892s STEP: Saw pod success Aug 25 00:28:18.382: INFO: Pod "downwardapi-volume-4de457d9-1767-4d6b-a000-a08e22da5bbb" satisfied condition "Succeeded or Failed" Aug 25 00:28:18.384: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-4de457d9-1767-4d6b-a000-a08e22da5bbb container client-container: STEP: delete the pod Aug 25 00:28:18.429: INFO: Waiting for pod downwardapi-volume-4de457d9-1767-4d6b-a000-a08e22da5bbb to disappear Aug 25 00:28:18.438: INFO: Pod downwardapi-volume-4de457d9-1767-4d6b-a000-a08e22da5bbb no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:28:18.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8782" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":303,"completed":184,"skipped":3218,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:28:18.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-7291 [It] should have a working scale subresource [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-7291 Aug 25 00:28:18.537: INFO: Found 0 stateful pods, waiting for 1 Aug 25 00:28:28.542: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 25 00:28:28.565: INFO: Deleting all statefulset in ns statefulset-7291 Aug 25 00:28:28.617: INFO: Scaling statefulset ss to 0 Aug 25 00:28:48.738: INFO: Waiting for statefulset status.replicas updated to 0 Aug 25 00:28:48.741: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:28:48.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7291" for this suite. 
• [SLOW TEST:30.419 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":303,"completed":185,"skipped":3242,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:28:48.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:28:50.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-6088" for this suite. 
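The Lease test above checks that the coordination.k8s.io API supports the standard verbs. A sketch of create, renew (an update of spec.renewTime), and delete for a hypothetical Lease in namespace "demo":

package main

import (
	"context"
	"time"

	coordinationv1 "k8s.io/api/coordination/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	ctx := context.TODO()
	leases := cs.CoordinationV1().Leases("demo")

	holder := "demo-holder" // hypothetical holder identity
	dur := int32(30)
	now := metav1.NewMicroTime(time.Now())
	lease := &coordinationv1.Lease{
		ObjectMeta: metav1.ObjectMeta{Name: "lease-demo"},
		Spec: coordinationv1.LeaseSpec{
			HolderIdentity:       &holder,
			LeaseDurationSeconds: &dur,
			AcquireTime:          &now,
			RenewTime:            &now,
		},
	}
	created, err := leases.Create(ctx, lease, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Renewing is just an update of spec.renewTime.
	renew := metav1.NewMicroTime(time.Now())
	created.Spec.RenewTime = &renew
	if _, err := leases.Update(ctx, created, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	if err := leases.Delete(ctx, "lease-demo", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}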
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":303,"completed":186,"skipped":3247,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:28:50.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Aug 25 00:28:51.262: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:29:10.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5834" for this suite. 
• [SLOW TEST:20.552 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":303,"completed":187,"skipped":3248,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:29:10.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Aug 25 00:29:18.016: INFO: Successfully updated pod "pod-update-activedeadlineseconds-d0bf8604-cbb7-4cc0-b154-121e3fa19ba7" Aug 25 00:29:18.016: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-d0bf8604-cbb7-4cc0-b154-121e3fa19ba7" in namespace "pods-7217" to be "terminated due to deadline exceeded" Aug 25 00:29:18.351: INFO: Pod "pod-update-activedeadlineseconds-d0bf8604-cbb7-4cc0-b154-121e3fa19ba7": Phase="Running", Reason="", readiness=true. Elapsed: 335.256125ms Aug 25 00:29:20.411: INFO: Pod "pod-update-activedeadlineseconds-d0bf8604-cbb7-4cc0-b154-121e3fa19ba7": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.395211838s Aug 25 00:29:20.411: INFO: Pod "pod-update-activedeadlineseconds-d0bf8604-cbb7-4cc0-b154-121e3fa19ba7" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:29:20.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7217" for this suite. 
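activeDeadlineSeconds is one of the few pod spec fields that may be updated on a running pod, which is what the test above relies on: once the deadline elapses, the kubelet fails the pod with Reason="DeadlineExceeded", exactly as the log shows. A sketch of the update as a strategic-merge patch (pod name as in the log, namespace hypothetical):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	// Setting (or lowering) spec.activeDeadlineSeconds is permitted; the
	// kubelet terminates the pod once the deadline passes.
	patch := []byte(`{"spec":{"activeDeadlineSeconds":5}}`)
	pod, err := cs.CoreV1().Pods("demo").Patch(context.TODO(),
		"pod-update-activedeadlineseconds-d0bf8604-cbb7-4cc0-b154-121e3fa19ba7",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("patched pod, current phase:", pod.Status.Phase)
}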
• [SLOW TEST:9.710 seconds] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":303,"completed":188,"skipped":3268,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:29:20.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should provide secure master service [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:29:21.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3437" for this suite. 
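The "secure master service" check above asserts that the built-in kubernetes service in the default namespace publishes the apiserver's https port. A sketch of the same lookup:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	svc, err := cs.CoreV1().Services("default").Get(context.TODO(), "kubernetes", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range svc.Spec.Ports {
		if p.Name == "https" && p.Port == 443 && p.Protocol == corev1.ProtocolTCP {
			fmt.Println("apiserver service publishes https/443")
		}
	}
}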
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":303,"completed":189,"skipped":3278,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:29:21.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-730b93bd-5110-43dc-b31c-9c6944d5833d STEP: Creating a pod to test consume configMaps Aug 25 00:29:22.745: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-67a133d7-1b9d-4c18-afae-eef786a6f432" in namespace "projected-18" to be "Succeeded or Failed" Aug 25 00:29:23.058: INFO: Pod "pod-projected-configmaps-67a133d7-1b9d-4c18-afae-eef786a6f432": Phase="Pending", Reason="", readiness=false. Elapsed: 312.94037ms Aug 25 00:29:25.232: INFO: Pod "pod-projected-configmaps-67a133d7-1b9d-4c18-afae-eef786a6f432": Phase="Pending", Reason="", readiness=false. Elapsed: 2.486411835s Aug 25 00:29:27.484: INFO: Pod "pod-projected-configmaps-67a133d7-1b9d-4c18-afae-eef786a6f432": Phase="Pending", Reason="", readiness=false. Elapsed: 4.739018614s Aug 25 00:29:29.634: INFO: Pod "pod-projected-configmaps-67a133d7-1b9d-4c18-afae-eef786a6f432": Phase="Running", Reason="", readiness=true. Elapsed: 6.889292303s Aug 25 00:29:31.698: INFO: Pod "pod-projected-configmaps-67a133d7-1b9d-4c18-afae-eef786a6f432": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.952999791s STEP: Saw pod success Aug 25 00:29:31.698: INFO: Pod "pod-projected-configmaps-67a133d7-1b9d-4c18-afae-eef786a6f432" satisfied condition "Succeeded or Failed" Aug 25 00:29:31.700: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-67a133d7-1b9d-4c18-afae-eef786a6f432 container projected-configmap-volume-test: STEP: delete the pod Aug 25 00:29:32.543: INFO: Waiting for pod pod-projected-configmaps-67a133d7-1b9d-4c18-afae-eef786a6f432 to disappear Aug 25 00:29:33.004: INFO: Pod pod-projected-configmaps-67a133d7-1b9d-4c18-afae-eef786a6f432 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:29:33.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-18" for this suite. 
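The defaultMode variant above sets file permissions for the projected configMap volume as a whole. A sketch of the volume wiring with mode 0400 and hypothetical names:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	ctx := context.TODO()

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-cm"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().ConfigMaps("demo").Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// DefaultMode applies to every file projected into the volume.
	mode := int32(0400)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-cm"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"cat", "/etc/projected/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("demo").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("pod created; files under /etc/projected carry mode 0400")
}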
• [SLOW TEST:11.808 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":190,"skipped":3297,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:29:33.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:29:50.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3637" for this suite. • [SLOW TEST:17.554 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":303,"completed":191,"skipped":3305,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:29:50.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 25 00:29:51.055: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Aug 25 00:29:54.093: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-733 create -f -' Aug 25 00:29:58.331: INFO: stderr: "" Aug 25 00:29:58.331: INFO: stdout: "e2e-test-crd-publish-openapi-5854-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Aug 25 00:29:58.331: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-733 delete e2e-test-crd-publish-openapi-5854-crds test-cr' Aug 25 00:29:58.451: INFO: stderr: "" Aug 25 00:29:58.451: INFO: stdout: "e2e-test-crd-publish-openapi-5854-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Aug 25 00:29:58.451: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-733 apply -f -' Aug 25 00:29:58.768: INFO: stderr: "" Aug 25 00:29:58.768: INFO: stdout: "e2e-test-crd-publish-openapi-5854-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Aug 25 00:29:58.768: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-733 delete e2e-test-crd-publish-openapi-5854-crds test-cr' Aug 25 00:29:58.886: INFO: stderr: "" Aug 25 00:29:58.886: INFO: stdout: "e2e-test-crd-publish-openapi-5854-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Aug 25 00:29:58.886: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5854-crds' Aug 25 00:29:59.222: INFO: stderr: "" Aug 25 00:29:59.222: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5854-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. 
Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:30:02.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-733" for this suite. • [SLOW TEST:11.441 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":303,"completed":192,"skipped":3316,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:30:02.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 25 00:30:02.310: INFO: Creating ReplicaSet my-hostname-basic-0796a1b7-8c6d-40af-a375-f5fe80f130bf Aug 25 00:30:02.342: INFO: Pod name my-hostname-basic-0796a1b7-8c6d-40af-a375-f5fe80f130bf: Found 0 pods out of 1 Aug 25 00:30:07.346: INFO: Pod name my-hostname-basic-0796a1b7-8c6d-40af-a375-f5fe80f130bf: Found 1 pods out of 1 Aug 25 00:30:07.346: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-0796a1b7-8c6d-40af-a375-f5fe80f130bf" is running Aug 25 00:30:07.348: INFO: Pod "my-hostname-basic-0796a1b7-8c6d-40af-a375-f5fe80f130bf-6d9d7" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 
UTC LastTransitionTime:2020-08-25 00:30:02 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-25 00:30:05 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-25 00:30:05 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-25 00:30:02 +0000 UTC Reason: Message:}]) Aug 25 00:30:07.349: INFO: Trying to dial the pod Aug 25 00:30:12.359: INFO: Controller my-hostname-basic-0796a1b7-8c6d-40af-a375-f5fe80f130bf: Got expected result from replica 1 [my-hostname-basic-0796a1b7-8c6d-40af-a375-f5fe80f130bf-6d9d7]: "my-hostname-basic-0796a1b7-8c6d-40af-a375-f5fe80f130bf-6d9d7", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:30:12.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4431" for this suite. • [SLOW TEST:10.149 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":193,"skipped":3317,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:30:12.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Aug 25 00:30:12.722: INFO: Waiting up to 5m0s for pod "downward-api-fb1a9ccb-ef22-4282-9564-c29b384d8e2d" in namespace "downward-api-6176" to be "Succeeded or Failed" Aug 25 00:30:12.738: INFO: Pod "downward-api-fb1a9ccb-ef22-4282-9564-c29b384d8e2d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.140716ms Aug 25 00:30:14.742: INFO: Pod "downward-api-fb1a9ccb-ef22-4282-9564-c29b384d8e2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019882975s Aug 25 00:30:16.787: INFO: Pod "downward-api-fb1a9ccb-ef22-4282-9564-c29b384d8e2d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.06455184s Aug 25 00:30:18.791: INFO: Pod "downward-api-fb1a9ccb-ef22-4282-9564-c29b384d8e2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06883038s STEP: Saw pod success Aug 25 00:30:18.791: INFO: Pod "downward-api-fb1a9ccb-ef22-4282-9564-c29b384d8e2d" satisfied condition "Succeeded or Failed" Aug 25 00:30:18.794: INFO: Trying to get logs from node latest-worker2 pod downward-api-fb1a9ccb-ef22-4282-9564-c29b384d8e2d container dapi-container: STEP: delete the pod Aug 25 00:30:18.827: INFO: Waiting for pod downward-api-fb1a9ccb-ef22-4282-9564-c29b384d8e2d to disappear Aug 25 00:30:18.843: INFO: Pod downward-api-fb1a9ccb-ef22-4282-9564-c29b384d8e2d no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:30:18.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6176" for this suite. • [SLOW TEST:6.486 seconds] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":303,"completed":194,"skipped":3327,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:30:18.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 25 00:30:19.702: INFO: Checking APIGroup: apiregistration.k8s.io Aug 25 00:30:19.703: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Aug 25 00:30:19.703: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Aug 25 00:30:19.703: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Aug 25 00:30:19.703: INFO: Checking APIGroup: extensions Aug 25 00:30:19.703: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Aug 25 00:30:19.703: INFO: Versions found [{extensions/v1beta1 v1beta1}] Aug 25 00:30:19.703: INFO: extensions/v1beta1 matches extensions/v1beta1 
Aug 25 00:30:19.703: INFO: Checking APIGroup: apps Aug 25 00:30:19.704: INFO: PreferredVersion.GroupVersion: apps/v1 Aug 25 00:30:19.704: INFO: Versions found [{apps/v1 v1}] Aug 25 00:30:19.704: INFO: apps/v1 matches apps/v1 Aug 25 00:30:19.704: INFO: Checking APIGroup: events.k8s.io Aug 25 00:30:19.705: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Aug 25 00:30:19.705: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Aug 25 00:30:19.705: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Aug 25 00:30:19.705: INFO: Checking APIGroup: authentication.k8s.io Aug 25 00:30:19.706: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Aug 25 00:30:19.706: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] Aug 25 00:30:19.706: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Aug 25 00:30:19.706: INFO: Checking APIGroup: authorization.k8s.io Aug 25 00:30:19.706: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Aug 25 00:30:19.706: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] Aug 25 00:30:19.706: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Aug 25 00:30:19.706: INFO: Checking APIGroup: autoscaling Aug 25 00:30:19.707: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Aug 25 00:30:19.707: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Aug 25 00:30:19.707: INFO: autoscaling/v1 matches autoscaling/v1 Aug 25 00:30:19.707: INFO: Checking APIGroup: batch Aug 25 00:30:19.708: INFO: PreferredVersion.GroupVersion: batch/v1 Aug 25 00:30:19.708: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Aug 25 00:30:19.708: INFO: batch/v1 matches batch/v1 Aug 25 00:30:19.708: INFO: Checking APIGroup: certificates.k8s.io Aug 25 00:30:19.709: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Aug 25 00:30:19.709: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] Aug 25 00:30:19.709: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Aug 25 00:30:19.709: INFO: Checking APIGroup: networking.k8s.io Aug 25 00:30:19.709: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Aug 25 00:30:19.709: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Aug 25 00:30:19.709: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Aug 25 00:30:19.709: INFO: Checking APIGroup: policy Aug 25 00:30:19.710: INFO: PreferredVersion.GroupVersion: policy/v1beta1 Aug 25 00:30:19.710: INFO: Versions found [{policy/v1beta1 v1beta1}] Aug 25 00:30:19.710: INFO: policy/v1beta1 matches policy/v1beta1 Aug 25 00:30:19.710: INFO: Checking APIGroup: rbac.authorization.k8s.io Aug 25 00:30:19.711: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Aug 25 00:30:19.711: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Aug 25 00:30:19.711: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Aug 25 00:30:19.711: INFO: Checking APIGroup: storage.k8s.io Aug 25 00:30:19.712: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Aug 25 00:30:19.712: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Aug 25 00:30:19.712: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Aug 25 00:30:19.712: INFO: Checking APIGroup: admissionregistration.k8s.io Aug 25 00:30:19.713: INFO: PreferredVersion.GroupVersion: 
admissionregistration.k8s.io/v1 Aug 25 00:30:19.713: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Aug 25 00:30:19.713: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Aug 25 00:30:19.713: INFO: Checking APIGroup: apiextensions.k8s.io Aug 25 00:30:19.714: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Aug 25 00:30:19.714: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] Aug 25 00:30:19.714: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Aug 25 00:30:19.714: INFO: Checking APIGroup: scheduling.k8s.io Aug 25 00:30:19.715: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Aug 25 00:30:19.715: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Aug 25 00:30:19.715: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Aug 25 00:30:19.715: INFO: Checking APIGroup: coordination.k8s.io Aug 25 00:30:19.716: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Aug 25 00:30:19.716: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Aug 25 00:30:19.716: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Aug 25 00:30:19.716: INFO: Checking APIGroup: node.k8s.io Aug 25 00:30:19.717: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1beta1 Aug 25 00:30:19.717: INFO: Versions found [{node.k8s.io/v1beta1 v1beta1}] Aug 25 00:30:19.717: INFO: node.k8s.io/v1beta1 matches node.k8s.io/v1beta1 Aug 25 00:30:19.717: INFO: Checking APIGroup: discovery.k8s.io Aug 25 00:30:19.718: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1 Aug 25 00:30:19.718: INFO: Versions found [{discovery.k8s.io/v1beta1 v1beta1}] Aug 25 00:30:19.718: INFO: discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1 [AfterEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:30:19.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-1378" for this suite. 
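The long run of "X matches X" lines above is the Discovery test verifying, group by group, that each API group's PreferredVersion appears among its advertised Versions. The same walk via the discovery client:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	groups, err := cs.Discovery().ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		found := false
		for _, v := range g.Versions {
			if v.GroupVersion == g.PreferredVersion.GroupVersion {
				found = true
			}
		}
		fmt.Printf("%s: preferred %s, advertised=%v\n", g.Name, g.PreferredVersion.GroupVersion, found)
	}
}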
•{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":303,"completed":195,"skipped":3353,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:30:19.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 25 00:30:20.708: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:30:27.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6782" for this suite. 
• [SLOW TEST:8.182 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":303,"completed":196,"skipped":3354,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:30:27.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Aug 25 00:30:28.496: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Aug 25 00:30:30.507: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733912228, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733912228, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733912228, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733912228, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired 
with the endpoint Aug 25 00:30:33.570: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 25 00:30:33.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:30:34.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-4501" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:6.941 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":303,"completed":197,"skipped":3386,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:30:34.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 25 00:30:35.005: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:30:39.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8820" for this suite. 
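The websocket test above drives the pods/exec subresource over the WebSocket protocol directly; client-go's stock executor speaks SPDY against the same subresource URL, so an equivalent call path (pod, container, and namespace names hypothetical) looks roughly like this:

package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	// Build the exec subresource request the way kubectl does.
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace("demo").Name("pod-exec").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Command: []string{"echo", "remote execution"},
			Stdout:  true,
			Stderr:  true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Print(stdout.String())
}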
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":303,"completed":198,"skipped":3394,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:30:39.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-c34a8484-cc4d-49c7-afa1-44030cf3056e STEP: Creating secret with name s-test-opt-upd-2caa6c42-9c79-4461-853c-5c2cd03baada STEP: Creating the pod STEP: Deleting secret s-test-opt-del-c34a8484-cc4d-49c7-afa1-44030cf3056e STEP: Updating secret s-test-opt-upd-2caa6c42-9c79-4461-853c-5c2cd03baada STEP: Creating secret with name s-test-opt-create-ffc651c1-fd42-468a-a67d-c414c97359d1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:32:11.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2453" for this suite. • [SLOW TEST:92.077 seconds] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":199,"skipped":3405,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:32:11.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:32:12.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8531" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":303,"completed":200,"skipped":3407,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:32:12.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 25 00:32:12.843: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Aug 25 00:32:14.888: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:32:15.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3411" for this suite. 
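The exceeded-quota test above pins a two-pod ResourceQuota, creates an RC asking for three replicas, and watches a ReplicaFailure condition appear, then clear once the RC is scaled down. A condensed sketch (namespace hypothetical; in practice you would poll briefly for the condition, as the test does):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	ctx := context.TODO()
	ns := "demo"

	// Quota that allows only two pods in the namespace.
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("2")},
		},
	}
	if _, err := cs.CoreV1().ResourceQuotas(ns).Create(ctx, quota, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// RC asking for three replicas; the third pod create is rejected by
	// the quota, and the controller surfaces a ReplicaFailure condition.
	replicas := int32(3)
	labels := map[string]string{"name": "condition-test"}
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "c", Image: "busybox", Command: []string{"sleep", "3600"}}},
				},
			},
		},
	}
	if _, err := cs.CoreV1().ReplicationControllers(ns).Create(ctx, rc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	got, err := cs.CoreV1().ReplicationControllers(ns).Get(ctx, "condition-test", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range got.Status.Conditions {
		if c.Type == corev1.ReplicationControllerReplicaFailure {
			fmt.Println("failure condition:", c.Reason, c.Message)
		}
	}
}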
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":303,"completed":201,"skipped":3410,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:32:15.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4522.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4522.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 25 00:32:29.485: INFO: DNS probes using dns-4522/dns-test-adcbce45-dea0-4f1f-affd-53e698710166 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:32:29.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4522" for this suite. 
• [SLOW TEST:14.733 seconds] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":303,"completed":202,"skipped":3413,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:32:30.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 25 00:32:32.286: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 25 00:32:34.297: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733912352, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733912352, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733912353, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733912352, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 25 00:32:36.318: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733912352, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733912352, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733912353, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733912352, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 25 00:32:40.115: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 25 00:32:40.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:32:41.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4506" for this suite. STEP: Destroying namespace "webhook-4506-markers" for this suite. 
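The registration step above amounts to creating a ValidatingWebhookConfiguration whose rules match the custom resource. A minimal hand-written sketch, assuming a webhook already serving behind the e2e-test-webhook service; the configuration name, API group, resource, and CA bundle are placeholders, not the test's own values:
  kubectl apply -f - <<'EOF'
  apiVersion: admissionregistration.k8s.io/v1
  kind: ValidatingWebhookConfiguration
  metadata:
    name: deny-custom-resource-example
  webhooks:
  - name: deny-crd-data.example.com
    rules:
    - apiGroups: ["example.com"]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE", "DELETE"]
      resources: ["examplecrds"]
    clientConfig:
      service:
        namespace: default
        name: e2e-test-webhook
        path: /custom-resource
      caBundle: <base64-encoded-CA-certificate>
    sideEffects: None
    admissionReviewVersions: ["v1"]
  EOF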
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.225 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":303,"completed":203,"skipped":3438,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:32:41.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:32:42.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-1699" for this suite. 
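The 406 above comes from content negotiation: the client asks the apiserver to render a resource as a meta.k8s.io Table via the Accept header, and a backend that cannot produce Tables must refuse. A sketch with curl; the server address and credentials are placeholders:
  # Request the Table rendering of a resource list; a backend without
  # Table support answers 406 Not Acceptable
  curl -k -H 'Accept: application/json;as=Table;v=v1;g=meta.k8s.io' \
    https://<apiserver>/api/v1/namespaces/default/pods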
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":303,"completed":204,"skipped":3461,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:32:42.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Update Demo /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308 [It] should scale a replication controller [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Aug 25 00:32:42.150: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7525' Aug 25 00:32:42.510: INFO: stderr: "" Aug 25 00:32:42.510: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 25 00:32:42.510: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7525' Aug 25 00:32:42.703: INFO: stderr: "" Aug 25 00:32:42.703: INFO: stdout: "update-demo-nautilus-l55kw update-demo-nautilus-wzsx2 " Aug 25 00:32:42.703: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l55kw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7525' Aug 25 00:32:42.799: INFO: stderr: "" Aug 25 00:32:42.799: INFO: stdout: "" Aug 25 00:32:42.799: INFO: update-demo-nautilus-l55kw is created but not running Aug 25 00:32:47.799: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7525' Aug 25 00:32:47.952: INFO: stderr: "" Aug 25 00:32:47.952: INFO: stdout: "update-demo-nautilus-l55kw update-demo-nautilus-wzsx2 " Aug 25 00:32:47.952: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l55kw -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7525' Aug 25 00:32:48.310: INFO: stderr: "" Aug 25 00:32:48.310: INFO: stdout: "" Aug 25 00:32:48.310: INFO: update-demo-nautilus-l55kw is created but not running Aug 25 00:32:53.310: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7525' Aug 25 00:32:53.425: INFO: stderr: "" Aug 25 00:32:53.425: INFO: stdout: "update-demo-nautilus-l55kw update-demo-nautilus-wzsx2 " Aug 25 00:32:53.425: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l55kw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7525' Aug 25 00:32:53.537: INFO: stderr: "" Aug 25 00:32:53.537: INFO: stdout: "true" Aug 25 00:32:53.538: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l55kw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7525' Aug 25 00:32:53.669: INFO: stderr: "" Aug 25 00:32:53.669: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 25 00:32:53.669: INFO: validating pod update-demo-nautilus-l55kw Aug 25 00:32:53.672: INFO: got data: { "image": "nautilus.jpg" } Aug 25 00:32:53.673: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 25 00:32:53.673: INFO: update-demo-nautilus-l55kw is verified up and running Aug 25 00:32:53.673: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wzsx2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7525' Aug 25 00:32:53.773: INFO: stderr: "" Aug 25 00:32:53.773: INFO: stdout: "true" Aug 25 00:32:53.773: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wzsx2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7525' Aug 25 00:32:53.871: INFO: stderr: "" Aug 25 00:32:53.871: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 25 00:32:53.871: INFO: validating pod update-demo-nautilus-wzsx2 Aug 25 00:32:53.874: INFO: got data: { "image": "nautilus.jpg" } Aug 25 00:32:53.874: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Aug 25 00:32:53.874: INFO: update-demo-nautilus-wzsx2 is verified up and running STEP: scaling down the replication controller Aug 25 00:32:53.877: INFO: scanned /root for discovery docs: Aug 25 00:32:53.877: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7525' Aug 25 00:32:55.167: INFO: stderr: "" Aug 25 00:32:55.167: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 25 00:32:55.167: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7525' Aug 25 00:32:55.278: INFO: stderr: "" Aug 25 00:32:55.278: INFO: stdout: "update-demo-nautilus-l55kw update-demo-nautilus-wzsx2 " STEP: Replicas for name=update-demo: expected=1 actual=2 Aug 25 00:33:00.278: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7525' Aug 25 00:33:00.394: INFO: stderr: "" Aug 25 00:33:00.394: INFO: stdout: "update-demo-nautilus-l55kw " Aug 25 00:33:00.394: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l55kw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7525' Aug 25 00:33:00.522: INFO: stderr: "" Aug 25 00:33:00.522: INFO: stdout: "true" Aug 25 00:33:00.522: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l55kw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7525' Aug 25 00:33:00.616: INFO: stderr: "" Aug 25 00:33:00.616: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 25 00:33:00.616: INFO: validating pod update-demo-nautilus-l55kw Aug 25 00:33:00.619: INFO: got data: { "image": "nautilus.jpg" } Aug 25 00:33:00.619: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 25 00:33:00.619: INFO: update-demo-nautilus-l55kw is verified up and running STEP: scaling up the replication controller Aug 25 00:33:00.622: INFO: scanned /root for discovery docs: Aug 25 00:33:00.622: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7525' Aug 25 00:33:01.811: INFO: stderr: "" Aug 25 00:33:01.811: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Aug 25 00:33:01.811: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7525' Aug 25 00:33:01.943: INFO: stderr: "" Aug 25 00:33:01.943: INFO: stdout: "update-demo-nautilus-5cwwc update-demo-nautilus-l55kw " Aug 25 00:33:01.943: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5cwwc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7525' Aug 25 00:33:02.056: INFO: stderr: "" Aug 25 00:33:02.056: INFO: stdout: "" Aug 25 00:33:02.056: INFO: update-demo-nautilus-5cwwc is created but not running Aug 25 00:33:07.057: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7525' Aug 25 00:33:07.177: INFO: stderr: "" Aug 25 00:33:07.177: INFO: stdout: "update-demo-nautilus-5cwwc update-demo-nautilus-l55kw " Aug 25 00:33:07.177: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5cwwc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7525' Aug 25 00:33:07.283: INFO: stderr: "" Aug 25 00:33:07.283: INFO: stdout: "true" Aug 25 00:33:07.284: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5cwwc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7525' Aug 25 00:33:07.385: INFO: stderr: "" Aug 25 00:33:07.385: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 25 00:33:07.385: INFO: validating pod update-demo-nautilus-5cwwc Aug 25 00:33:07.389: INFO: got data: { "image": "nautilus.jpg" } Aug 25 00:33:07.389: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 25 00:33:07.389: INFO: update-demo-nautilus-5cwwc is verified up and running Aug 25 00:33:07.389: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l55kw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7525' Aug 25 00:33:07.496: INFO: stderr: "" Aug 25 00:33:07.496: INFO: stdout: "true" Aug 25 00:33:07.497: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l55kw -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7525' Aug 25 00:33:07.596: INFO: stderr: "" Aug 25 00:33:07.596: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 25 00:33:07.596: INFO: validating pod update-demo-nautilus-l55kw Aug 25 00:33:07.599: INFO: got data: { "image": "nautilus.jpg" } Aug 25 00:33:07.599: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 25 00:33:07.599: INFO: update-demo-nautilus-l55kw is verified up and running STEP: using delete to clean up resources Aug 25 00:33:07.600: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7525' Aug 25 00:33:07.718: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 25 00:33:07.718: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Aug 25 00:33:07.718: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7525' Aug 25 00:33:07.817: INFO: stderr: "No resources found in kubectl-7525 namespace.\n" Aug 25 00:33:07.817: INFO: stdout: "" Aug 25 00:33:07.817: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7525 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 25 00:33:08.212: INFO: stderr: "" Aug 25 00:33:08.212: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:33:08.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7525" for this suite. 
• [SLOW TEST:26.191 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306 should scale a replication controller [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":303,"completed":205,"skipped":3461,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:33:08.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-5ab63f44-6f57-4b0c-a9e1-f90c41c29314 STEP: Creating a pod to test consume secrets Aug 25 00:33:08.368: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-59213e4e-312b-4876-a2ad-67efab3da528" in namespace "projected-8306" to be "Succeeded or Failed" Aug 25 00:33:08.372: INFO: Pod "pod-projected-secrets-59213e4e-312b-4876-a2ad-67efab3da528": Phase="Pending", Reason="", readiness=false. Elapsed: 3.180365ms Aug 25 00:33:10.474: INFO: Pod "pod-projected-secrets-59213e4e-312b-4876-a2ad-67efab3da528": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106003379s Aug 25 00:33:12.479: INFO: Pod "pod-projected-secrets-59213e4e-312b-4876-a2ad-67efab3da528": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.110330342s STEP: Saw pod success Aug 25 00:33:12.479: INFO: Pod "pod-projected-secrets-59213e4e-312b-4876-a2ad-67efab3da528" satisfied condition "Succeeded or Failed" Aug 25 00:33:12.482: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-59213e4e-312b-4876-a2ad-67efab3da528 container secret-volume-test: STEP: delete the pod Aug 25 00:33:12.608: INFO: Waiting for pod pod-projected-secrets-59213e4e-312b-4876-a2ad-67efab3da528 to disappear Aug 25 00:33:12.618: INFO: Pod pod-projected-secrets-59213e4e-312b-4876-a2ad-67efab3da528 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:33:12.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8306" for this suite. 
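The pod behind this test mounts one secret through two projected volumes and reads it back from both paths. A minimal sketch; all names, paths, and the image are illustrative:
  kubectl create secret generic demo-secret --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-secrets-demo
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/secret-1/data-1 /etc/secret-2/data-1"]
      volumeMounts:
      - name: secret-1
        mountPath: /etc/secret-1
      - name: secret-2
        mountPath: /etc/secret-2
    volumes:
    - name: secret-1
      projected:
        sources:
        - secret:
            name: demo-secret
    - name: secret-2
      projected:
        sources:
        - secret:
            name: demo-secret
  EOF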
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":206,"skipped":3510,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:33:12.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-defc6ff0-effd-493a-92fa-c8fdd3a8083c STEP: Creating a pod to test consume secrets Aug 25 00:33:12.763: INFO: Waiting up to 5m0s for pod "pod-secrets-0fe2fbc1-f69c-47be-8823-c168b052ab73" in namespace "secrets-359" to be "Succeeded or Failed" Aug 25 00:33:12.804: INFO: Pod "pod-secrets-0fe2fbc1-f69c-47be-8823-c168b052ab73": Phase="Pending", Reason="", readiness=false. Elapsed: 40.871236ms Aug 25 00:33:14.947: INFO: Pod "pod-secrets-0fe2fbc1-f69c-47be-8823-c168b052ab73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183798966s Aug 25 00:33:16.951: INFO: Pod "pod-secrets-0fe2fbc1-f69c-47be-8823-c168b052ab73": Phase="Pending", Reason="", readiness=false. Elapsed: 4.187349564s Aug 25 00:33:18.955: INFO: Pod "pod-secrets-0fe2fbc1-f69c-47be-8823-c168b052ab73": Phase="Running", Reason="", readiness=true. Elapsed: 6.191707694s Aug 25 00:33:21.512: INFO: Pod "pod-secrets-0fe2fbc1-f69c-47be-8823-c168b052ab73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.748030312s STEP: Saw pod success Aug 25 00:33:21.512: INFO: Pod "pod-secrets-0fe2fbc1-f69c-47be-8823-c168b052ab73" satisfied condition "Succeeded or Failed" Aug 25 00:33:21.515: INFO: Trying to get logs from node latest-worker pod pod-secrets-0fe2fbc1-f69c-47be-8823-c168b052ab73 container secret-volume-test: STEP: delete the pod Aug 25 00:33:22.363: INFO: Waiting for pod pod-secrets-0fe2fbc1-f69c-47be-8823-c168b052ab73 to disappear Aug 25 00:33:22.396: INFO: Pod pod-secrets-0fe2fbc1-f69c-47be-8823-c168b052ab73 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:33:22.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-359" for this suite. 
• [SLOW TEST:9.780 seconds] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":207,"skipped":3514,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:33:22.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath Aug 25 00:33:23.537: INFO: Waiting up to 5m0s for pod "var-expansion-1982b257-8bbf-4f2c-804f-a8423a1cd9ed" in namespace "var-expansion-8041" to be "Succeeded or Failed" Aug 25 00:33:24.124: INFO: Pod "var-expansion-1982b257-8bbf-4f2c-804f-a8423a1cd9ed": Phase="Pending", Reason="", readiness=false. Elapsed: 587.106828ms Aug 25 00:33:26.199: INFO: Pod "var-expansion-1982b257-8bbf-4f2c-804f-a8423a1cd9ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.662162693s Aug 25 00:33:28.894: INFO: Pod "var-expansion-1982b257-8bbf-4f2c-804f-a8423a1cd9ed": Phase="Pending", Reason="", readiness=false. Elapsed: 5.357610641s Aug 25 00:33:30.899: INFO: Pod "var-expansion-1982b257-8bbf-4f2c-804f-a8423a1cd9ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.361956267s STEP: Saw pod success Aug 25 00:33:30.899: INFO: Pod "var-expansion-1982b257-8bbf-4f2c-804f-a8423a1cd9ed" satisfied condition "Succeeded or Failed" Aug 25 00:33:30.901: INFO: Trying to get logs from node latest-worker pod var-expansion-1982b257-8bbf-4f2c-804f-a8423a1cd9ed container dapi-container: STEP: delete the pod Aug 25 00:33:31.039: INFO: Waiting for pod var-expansion-1982b257-8bbf-4f2c-804f-a8423a1cd9ed to disappear Aug 25 00:33:31.217: INFO: Pod var-expansion-1982b257-8bbf-4f2c-804f-a8423a1cd9ed no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:33:31.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8041" for this suite. 
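The substitution under test is the subPathExpr field, which expands $(VAR) references against the container's environment when resolving a volume subpath, so each pod can land in its own subdirectory. A minimal sketch; names and the image are illustrative:
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "echo ok > /volume_mount/out && cat /volume_mount/out"]
      env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      volumeMounts:
      - name: workdir
        mountPath: /volume_mount
        subPathExpr: $(POD_NAME)   # resolved to the pod's own name
    volumes:
    - name: workdir
      emptyDir: {}
  EOF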
• [SLOW TEST:8.982 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":303,"completed":208,"skipped":3519,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:33:31.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-9802 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9802 to expose endpoints map[] Aug 25 00:33:31.607: INFO: Failed to get Endpoints object: endpoints "endpoint-test2" not found Aug 25 00:33:32.642: INFO: successfully validated that service endpoint-test2 in namespace services-9802 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-9802 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9802 to expose endpoints map[pod1:[80]] Aug 25 00:33:36.751: INFO: successfully validated that service endpoint-test2 in namespace services-9802 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-9802 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9802 to expose endpoints map[pod1:[80] pod2:[80]] Aug 25 00:33:39.826: INFO: successfully validated that service endpoint-test2 in namespace services-9802 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-9802 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9802 to expose endpoints map[pod2:[80]] Aug 25 00:33:39.881: INFO: successfully validated that service endpoint-test2 in namespace services-9802 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-9802 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9802 to expose endpoints map[] Aug 25 00:33:41.080: INFO: successfully validated that service endpoint-test2 in namespace services-9802 exposes endpoints map[] [AfterEach] [sig-network] Services
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:33:41.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9802" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:9.991 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":303,"completed":209,"skipped":3529,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:33:41.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:33:52.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1488" for this suite. • [SLOW TEST:11.870 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":303,"completed":210,"skipped":3537,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:33:53.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Aug 25 00:33:53.618: INFO: >>> kubeConfig: /root/.kube/config Aug 25 00:33:56.621: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:34:09.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3489" for this suite. • [SLOW TEST:16.683 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":303,"completed":211,"skipped":3541,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:34:09.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name 
secret-test-f1d7ec6b-f700-4015-a27f-934b61db1946 STEP: Creating a pod to test consume secrets Aug 25 00:34:10.150: INFO: Waiting up to 5m0s for pod "pod-secrets-2f3d31f5-8df2-41db-9ef4-fa0d1ac4c422" in namespace "secrets-5210" to be "Succeeded or Failed" Aug 25 00:34:10.245: INFO: Pod "pod-secrets-2f3d31f5-8df2-41db-9ef4-fa0d1ac4c422": Phase="Pending", Reason="", readiness=false. Elapsed: 94.956434ms Aug 25 00:34:12.286: INFO: Pod "pod-secrets-2f3d31f5-8df2-41db-9ef4-fa0d1ac4c422": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135659996s Aug 25 00:34:14.379: INFO: Pod "pod-secrets-2f3d31f5-8df2-41db-9ef4-fa0d1ac4c422": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.228659513s STEP: Saw pod success Aug 25 00:34:14.379: INFO: Pod "pod-secrets-2f3d31f5-8df2-41db-9ef4-fa0d1ac4c422" satisfied condition "Succeeded or Failed" Aug 25 00:34:14.382: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-2f3d31f5-8df2-41db-9ef4-fa0d1ac4c422 container secret-env-test: STEP: delete the pod Aug 25 00:34:14.571: INFO: Waiting for pod pod-secrets-2f3d31f5-8df2-41db-9ef4-fa0d1ac4c422 to disappear Aug 25 00:34:14.638: INFO: Pod pod-secrets-2f3d31f5-8df2-41db-9ef4-fa0d1ac4c422 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:34:14.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5210" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":303,"completed":212,"skipped":3542,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:34:14.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:34:14.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6466" for this suite. 
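The same lifecycle can be driven by hand with kubectl against any existing event; the event name below is illustrative, and the patch is a merge patch on the v1 Event's top-level message field:
  kubectl get events --all-namespaces                      # list everywhere
  kubectl patch event demo-event --type=merge -p '{"message":"patched"}'
  kubectl get event demo-event -o yaml                     # fetch it back
  kubectl delete event demo-event
  kubectl get events --all-namespaces                      # confirm it is gone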
•{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":213,"skipped":3543,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:34:14.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-3f6c5914-a7ab-4a55-897b-6df2126b482a STEP: Creating a pod to test consume configMaps Aug 25 00:34:15.172: INFO: Waiting up to 5m0s for pod "pod-configmaps-7f3859b7-ff76-4e6c-98b3-c4a1d787e8b3" in namespace "configmap-8095" to be "Succeeded or Failed" Aug 25 00:34:15.361: INFO: Pod "pod-configmaps-7f3859b7-ff76-4e6c-98b3-c4a1d787e8b3": Phase="Pending", Reason="", readiness=false. Elapsed: 189.489311ms Aug 25 00:34:17.365: INFO: Pod "pod-configmaps-7f3859b7-ff76-4e6c-98b3-c4a1d787e8b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193005432s Aug 25 00:34:19.369: INFO: Pod "pod-configmaps-7f3859b7-ff76-4e6c-98b3-c4a1d787e8b3": Phase="Running", Reason="", readiness=true. Elapsed: 4.197678101s Aug 25 00:34:21.502: INFO: Pod "pod-configmaps-7f3859b7-ff76-4e6c-98b3-c4a1d787e8b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.330273271s STEP: Saw pod success Aug 25 00:34:21.502: INFO: Pod "pod-configmaps-7f3859b7-ff76-4e6c-98b3-c4a1d787e8b3" satisfied condition "Succeeded or Failed" Aug 25 00:34:21.517: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-7f3859b7-ff76-4e6c-98b3-c4a1d787e8b3 container configmap-volume-test: STEP: delete the pod Aug 25 00:34:21.604: INFO: Waiting for pod pod-configmaps-7f3859b7-ff76-4e6c-98b3-c4a1d787e8b3 to disappear Aug 25 00:34:21.612: INFO: Pod pod-configmaps-7f3859b7-ff76-4e6c-98b3-c4a1d787e8b3 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:34:21.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8095" for this suite. 
• [SLOW TEST:6.792 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":214,"skipped":3545,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:34:21.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Aug 25 00:34:21.820: INFO: Waiting up to 5m0s for pod "pod-c81ae377-536e-4fa0-8ecb-c6e0cc0ca791" in namespace "emptydir-8558" to be "Succeeded or Failed" Aug 25 00:34:21.828: INFO: Pod "pod-c81ae377-536e-4fa0-8ecb-c6e0cc0ca791": Phase="Pending", Reason="", readiness=false. Elapsed: 8.739251ms Aug 25 00:34:23.919: INFO: Pod "pod-c81ae377-536e-4fa0-8ecb-c6e0cc0ca791": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099231842s Aug 25 00:34:25.923: INFO: Pod "pod-c81ae377-536e-4fa0-8ecb-c6e0cc0ca791": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.103589734s STEP: Saw pod success Aug 25 00:34:25.923: INFO: Pod "pod-c81ae377-536e-4fa0-8ecb-c6e0cc0ca791" satisfied condition "Succeeded or Failed" Aug 25 00:34:25.926: INFO: Trying to get logs from node latest-worker pod pod-c81ae377-536e-4fa0-8ecb-c6e0cc0ca791 container test-container: STEP: delete the pod Aug 25 00:34:26.346: INFO: Waiting for pod pod-c81ae377-536e-4fa0-8ecb-c6e0cc0ca791 to disappear Aug 25 00:34:26.354: INFO: Pod pod-c81ae377-536e-4fa0-8ecb-c6e0cc0ca791 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:34:26.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8558" for this suite. 
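In the test name, (root,0777,tmpfs) pins down the writing user, the file mode being verified, and the emptyDir medium; the medium is the only part that shows up in the pod spec. A runnable sketch (names and image illustrative) that writes a 0777 file onto a memory-backed emptyDir:
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-tmpfs-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f && mount | grep /test-volume"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir:
        medium: Memory    # tmpfs-backed rather than node disk
  EOF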
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":215,"skipped":3556,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:34:26.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-c4ea57bd-1f76-47c2-aa74-090a024df99d STEP: Creating a pod to test consume secrets Aug 25 00:34:26.512: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b00e0b71-59bb-4281-b20c-b030fe675c69" in namespace "projected-3992" to be "Succeeded or Failed" Aug 25 00:34:26.519: INFO: Pod "pod-projected-secrets-b00e0b71-59bb-4281-b20c-b030fe675c69": Phase="Pending", Reason="", readiness=false. Elapsed: 6.869395ms Aug 25 00:34:28.562: INFO: Pod "pod-projected-secrets-b00e0b71-59bb-4281-b20c-b030fe675c69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050538433s Aug 25 00:34:30.566: INFO: Pod "pod-projected-secrets-b00e0b71-59bb-4281-b20c-b030fe675c69": Phase="Running", Reason="", readiness=true. Elapsed: 4.054409944s Aug 25 00:34:32.570: INFO: Pod "pod-projected-secrets-b00e0b71-59bb-4281-b20c-b030fe675c69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.058004678s STEP: Saw pod success Aug 25 00:34:32.570: INFO: Pod "pod-projected-secrets-b00e0b71-59bb-4281-b20c-b030fe675c69" satisfied condition "Succeeded or Failed" Aug 25 00:34:32.573: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-b00e0b71-59bb-4281-b20c-b030fe675c69 container projected-secret-volume-test: STEP: delete the pod Aug 25 00:34:32.640: INFO: Waiting for pod pod-projected-secrets-b00e0b71-59bb-4281-b20c-b030fe675c69 to disappear Aug 25 00:34:32.687: INFO: Pod pod-projected-secrets-b00e0b71-59bb-4281-b20c-b030fe675c69 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:34:32.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3992" for this suite. 
• [SLOW TEST:6.320 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":216,"skipped":3592,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:34:32.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 25 00:34:33.722: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 25 00:34:35.731: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733912473, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733912473, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733912473, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733912473, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 25 00:34:37.736: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733912473, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63733912473, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733912473, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733912473, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 25 00:34:41.009: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:34:41.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9575" for this suite. STEP: Destroying namespace "webhook-9575-markers" for this suite. 
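------------------------------
The spec above hinges on a safety property of the admission chain: the API server does not call admission webhooks for ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects themselves, since otherwise a misbehaving webhook could block its own removal. The test registers webhooks that would intercept everything, then shows a dummy configuration can still be created and deleted. A minimal client-go sketch of that create/delete step, assuming placeholder names and an unreachable placeholder backend URL (the kubeconfig path mirrors the one used throughout this run):

package main

import (
	"context"
	"fmt"

	admissionregv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	url := "https://127.0.0.1:9443/never-called" // placeholder; never contacted
	sideEffects := admissionregv1.SideEffectClassNone
	dummy := &admissionregv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-dummy-validating-webhook"},
		Webhooks: []admissionregv1.ValidatingWebhook{{
			Name:                    "dummy.example.com",
			ClientConfig:            admissionregv1.WebhookClientConfig{URL: &url},
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}

	ctx := context.Background()
	vwc := client.AdmissionregistrationV1().ValidatingWebhookConfigurations()
	if _, err := vwc.Create(ctx, dummy, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Deletion must succeed even with intercepting webhooks registered,
	// because webhook configuration objects are exempt from admission webhooks.
	if err := vwc.Delete(ctx, dummy.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("dummy validating webhook configuration created and deleted")
}
------------------------------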
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.602 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":303,"completed":217,"skipped":3651,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:34:41.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 25 00:34:41.538: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Aug 25 00:34:46.663: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 25 00:34:48.716: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Aug 25 00:34:49.309: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-6864 /apis/apps/v1/namespaces/deployment-6864/deployments/test-cleanup-deployment 7b9a5911-09a6-4f93-b8e3-70520bb282fa 3433990 1 2020-08-25 00:34:48 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-08-25 00:34:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004573628 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Aug 25 00:34:49.318: INFO: New ReplicaSet "test-cleanup-deployment-5d446bdd47" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5d446bdd47 deployment-6864 /apis/apps/v1/namespaces/deployment-6864/replicasets/test-cleanup-deployment-5d446bdd47 90c2d329-e984-4972-88c5-b8754ab937b7 3433994 1 2020-08-25 00:34:49 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 7b9a5911-09a6-4f93-b8e3-70520bb282fa 0xc004573db7 0xc004573db8}] [] [{kube-controller-manager Update apps/v1 2020-08-25 00:34:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b9a5911-09a6-4f93-b8e3-70520bb282fa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5d446bdd47,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004573e98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 25 00:34:49.318: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Aug 25 00:34:49.318: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-6864 /apis/apps/v1/namespaces/deployment-6864/replicasets/test-cleanup-controller 19ab2f32-457e-49ed-8158-142262cb503a 3433993 1 2020-08-25 00:34:41 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 7b9a5911-09a6-4f93-b8e3-70520bb282fa 0xc004573c07 0xc004573c08}] [] [{e2e.test Update apps/v1 2020-08-25 00:34:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-25 00:34:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"7b9a5911-09a6-4f93-b8e3-70520bb282fa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004573d38 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Aug 25 00:34:49.501: INFO: Pod "test-cleanup-controller-hl66h" is available: &Pod{ObjectMeta:{test-cleanup-controller-hl66h test-cleanup-controller- deployment-6864 /api/v1/namespaces/deployment-6864/pods/test-cleanup-controller-hl66h f375dba1-34d9-4da7-b041-0b87cfc815d1 3433980 0 2020-08-25 00:34:41 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 19ab2f32-457e-49ed-8158-142262cb503a 0xc00461a6c7 0xc00461a6c8}] [] [{kube-controller-manager Update v1 2020-08-25 00:34:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"19ab2f32-457e-49ed-8158-142262cb503a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-25 00:34:47 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.148\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q6xdb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q6xdb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q6xdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-25 00:34:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-25 00:34:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-25 00:34:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-25 00:34:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.11,PodIP:10.244.2.148,StartTime:2020-08-25 00:34:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-25 00:34:47 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c1c174d5bb1b810e76e21fb9bad3322b84c28fc385a898dde723749fe9544ef1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.148,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 25 00:34:49.502: INFO: Pod "test-cleanup-deployment-5d446bdd47-k8bgc" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-5d446bdd47-k8bgc test-cleanup-deployment-5d446bdd47- deployment-6864 /api/v1/namespaces/deployment-6864/pods/test-cleanup-deployment-5d446bdd47-k8bgc f48e5586-9b6c-463b-a37e-378f86cddcaa 3434000 0 2020-08-25 00:34:49 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-5d446bdd47 90c2d329-e984-4972-88c5-b8754ab937b7 0xc00461a957 0xc00461a958}] [] [{kube-controller-manager Update v1 2020-08-25 00:34:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"90c2d329-e984-4972-88c5-b8754ab937b7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q6xdb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q6xdb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q6xdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,Ru
nAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-25 00:34:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:34:49.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6864" for this suite. 
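------------------------------
The cleanup behavior exercised above is driven by spec.revisionHistoryLimit: the dump shows RevisionHistoryLimit:*0, so as soon as the new ReplicaSet takes over, the controller deletes old ReplicaSets instead of retaining them for rollback. A minimal Go sketch of such a Deployment, with the names and image lifted from this log and everything else assumed:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	historyLimit := int32(0) // keep zero old ReplicaSets, matching RevisionHistoryLimit:*0 above
	labels := map[string]string{"name": "cleanup-pod"}

	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:             &replicas,
			RevisionHistoryLimit: &historyLimit,
			Selector:             &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost",
						Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20",
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", d)
}
------------------------------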
• [SLOW TEST:8.365 seconds] [sig-apps] Deployment /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":303,"completed":218,"skipped":3665,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:34:49.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2116 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-2116 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2116 Aug 25 00:34:50.418: INFO: Found 0 stateful pods, waiting for 1 Aug 25 00:35:00.422: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Aug 25 00:35:00.425: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2116 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 25 00:35:00.722: INFO: stderr: "I0825 00:35:00.576888 2033 log.go:181] (0xc0000eb080) (0xc0004cee60) Create stream\nI0825 00:35:00.576947 2033 log.go:181] (0xc0000eb080) (0xc0004cee60) Stream added, broadcasting: 1\nI0825 00:35:00.582264 2033 log.go:181] (0xc0000eb080) Reply frame received for 1\nI0825 00:35:00.582298 2033 log.go:181] (0xc0000eb080) (0xc0004cf900) Create stream\nI0825 00:35:00.582318 2033 log.go:181] (0xc0000eb080) (0xc0004cf900) Stream added, broadcasting: 3\nI0825 00:35:00.583071 2033 log.go:181] (0xc0000eb080) Reply frame received for 3\nI0825 00:35:00.583089 2033 log.go:181] (0xc0000eb080) (0xc000d2a000) Create stream\nI0825 00:35:00.583096 2033 log.go:181] (0xc0000eb080) (0xc000d2a000) Stream added, 
broadcasting: 5\nI0825 00:35:00.583802 2033 log.go:181] (0xc0000eb080) Reply frame received for 5\nI0825 00:35:00.678846 2033 log.go:181] (0xc0000eb080) Data frame received for 5\nI0825 00:35:00.678887 2033 log.go:181] (0xc000d2a000) (5) Data frame handling\nI0825 00:35:00.678913 2033 log.go:181] (0xc000d2a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0825 00:35:00.712548 2033 log.go:181] (0xc0000eb080) Data frame received for 3\nI0825 00:35:00.712590 2033 log.go:181] (0xc0004cf900) (3) Data frame handling\nI0825 00:35:00.712621 2033 log.go:181] (0xc0004cf900) (3) Data frame sent\nI0825 00:35:00.712639 2033 log.go:181] (0xc0000eb080) Data frame received for 3\nI0825 00:35:00.712654 2033 log.go:181] (0xc0004cf900) (3) Data frame handling\nI0825 00:35:00.712956 2033 log.go:181] (0xc0000eb080) Data frame received for 5\nI0825 00:35:00.712983 2033 log.go:181] (0xc000d2a000) (5) Data frame handling\nI0825 00:35:00.714895 2033 log.go:181] (0xc0000eb080) Data frame received for 1\nI0825 00:35:00.714907 2033 log.go:181] (0xc0004cee60) (1) Data frame handling\nI0825 00:35:00.714916 2033 log.go:181] (0xc0004cee60) (1) Data frame sent\nI0825 00:35:00.715064 2033 log.go:181] (0xc0000eb080) (0xc0004cee60) Stream removed, broadcasting: 1\nI0825 00:35:00.715147 2033 log.go:181] (0xc0000eb080) Go away received\nI0825 00:35:00.715546 2033 log.go:181] (0xc0000eb080) (0xc0004cee60) Stream removed, broadcasting: 1\nI0825 00:35:00.715564 2033 log.go:181] (0xc0000eb080) (0xc0004cf900) Stream removed, broadcasting: 3\nI0825 00:35:00.715574 2033 log.go:181] (0xc0000eb080) (0xc000d2a000) Stream removed, broadcasting: 5\n" Aug 25 00:35:00.722: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 25 00:35:00.722: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 25 00:35:00.726: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 25 00:35:10.730: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 25 00:35:10.730: INFO: Waiting for statefulset status.replicas updated to 0 Aug 25 00:35:10.856: INFO: POD NODE PHASE GRACE CONDITIONS Aug 25 00:35:10.856: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:34:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:01 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:34:50 +0000 UTC }] Aug 25 00:35:10.856: INFO: Aug 25 00:35:10.856: INFO: StatefulSet ss has not reached scale 3, at 1 Aug 25 00:35:11.861: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.883918188s Aug 25 00:35:12.968: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.87877051s Aug 25 00:35:13.973: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.771965s Aug 25 00:35:15.034: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.766853197s Aug 25 00:35:16.083: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.70538691s Aug 25 00:35:17.105: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.656462381s Aug 25 00:35:18.111: INFO: Verifying statefulset ss doesn't scale past 3 for another 
2.634218956s Aug 25 00:35:19.119: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.628802808s Aug 25 00:35:20.125: INFO: Verifying statefulset ss doesn't scale past 3 for another 620.465276ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2116 Aug 25 00:35:21.159: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2116 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 25 00:35:21.473: INFO: stderr: "I0825 00:35:21.382297 2051 log.go:181] (0xc000954fd0) (0xc00030f040) Create stream\nI0825 00:35:21.382345 2051 log.go:181] (0xc000954fd0) (0xc00030f040) Stream added, broadcasting: 1\nI0825 00:35:21.386529 2051 log.go:181] (0xc000954fd0) Reply frame received for 1\nI0825 00:35:21.386585 2051 log.go:181] (0xc000954fd0) (0xc000d08000) Create stream\nI0825 00:35:21.386601 2051 log.go:181] (0xc000954fd0) (0xc000d08000) Stream added, broadcasting: 3\nI0825 00:35:21.387566 2051 log.go:181] (0xc000954fd0) Reply frame received for 3\nI0825 00:35:21.387603 2051 log.go:181] (0xc000954fd0) (0xc000d08140) Create stream\nI0825 00:35:21.387614 2051 log.go:181] (0xc000954fd0) (0xc000d08140) Stream added, broadcasting: 5\nI0825 00:35:21.388640 2051 log.go:181] (0xc000954fd0) Reply frame received for 5\nI0825 00:35:21.463538 2051 log.go:181] (0xc000954fd0) Data frame received for 5\nI0825 00:35:21.463559 2051 log.go:181] (0xc000d08140) (5) Data frame handling\nI0825 00:35:21.463572 2051 log.go:181] (0xc000d08140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0825 00:35:21.463788 2051 log.go:181] (0xc000954fd0) Data frame received for 3\nI0825 00:35:21.463813 2051 log.go:181] (0xc000d08000) (3) Data frame handling\nI0825 00:35:21.463829 2051 log.go:181] (0xc000d08000) (3) Data frame sent\nI0825 00:35:21.464089 2051 log.go:181] (0xc000954fd0) Data frame received for 3\nI0825 00:35:21.464100 2051 log.go:181] (0xc000d08000) (3) Data frame handling\nI0825 00:35:21.464493 2051 log.go:181] (0xc000954fd0) Data frame received for 5\nI0825 00:35:21.464520 2051 log.go:181] (0xc000d08140) (5) Data frame handling\nI0825 00:35:21.466100 2051 log.go:181] (0xc000954fd0) Data frame received for 1\nI0825 00:35:21.466127 2051 log.go:181] (0xc00030f040) (1) Data frame handling\nI0825 00:35:21.466145 2051 log.go:181] (0xc00030f040) (1) Data frame sent\nI0825 00:35:21.466164 2051 log.go:181] (0xc000954fd0) (0xc00030f040) Stream removed, broadcasting: 1\nI0825 00:35:21.466188 2051 log.go:181] (0xc000954fd0) Go away received\nI0825 00:35:21.466763 2051 log.go:181] (0xc000954fd0) (0xc00030f040) Stream removed, broadcasting: 1\nI0825 00:35:21.466786 2051 log.go:181] (0xc000954fd0) (0xc000d08000) Stream removed, broadcasting: 3\nI0825 00:35:21.466798 2051 log.go:181] (0xc000954fd0) (0xc000d08140) Stream removed, broadcasting: 5\n" Aug 25 00:35:21.473: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 25 00:35:21.473: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 25 00:35:21.473: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2116 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 25 00:35:21.970: INFO: stderr: "I0825 00:35:21.891335 2069 log.go:181] 
(0xc00069ac60) (0xc000196280) Create stream\nI0825 00:35:21.891430 2069 log.go:181] (0xc00069ac60) (0xc000196280) Stream added, broadcasting: 1\nI0825 00:35:21.901475 2069 log.go:181] (0xc00069ac60) Reply frame received for 1\nI0825 00:35:21.901521 2069 log.go:181] (0xc00069ac60) (0xc000554000) Create stream\nI0825 00:35:21.901532 2069 log.go:181] (0xc00069ac60) (0xc000554000) Stream added, broadcasting: 3\nI0825 00:35:21.903499 2069 log.go:181] (0xc00069ac60) Reply frame received for 3\nI0825 00:35:21.903523 2069 log.go:181] (0xc00069ac60) (0xc000b0e0a0) Create stream\nI0825 00:35:21.903531 2069 log.go:181] (0xc00069ac60) (0xc000b0e0a0) Stream added, broadcasting: 5\nI0825 00:35:21.904148 2069 log.go:181] (0xc00069ac60) Reply frame received for 5\nI0825 00:35:21.963518 2069 log.go:181] (0xc00069ac60) Data frame received for 3\nI0825 00:35:21.963543 2069 log.go:181] (0xc000554000) (3) Data frame handling\nI0825 00:35:21.963559 2069 log.go:181] (0xc000554000) (3) Data frame sent\nI0825 00:35:21.963568 2069 log.go:181] (0xc00069ac60) Data frame received for 3\nI0825 00:35:21.963575 2069 log.go:181] (0xc000554000) (3) Data frame handling\nI0825 00:35:21.963834 2069 log.go:181] (0xc00069ac60) Data frame received for 5\nI0825 00:35:21.963866 2069 log.go:181] (0xc000b0e0a0) (5) Data frame handling\nI0825 00:35:21.963883 2069 log.go:181] (0xc000b0e0a0) (5) Data frame sent\nI0825 00:35:21.963890 2069 log.go:181] (0xc00069ac60) Data frame received for 5\nI0825 00:35:21.963896 2069 log.go:181] (0xc000b0e0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0825 00:35:21.965434 2069 log.go:181] (0xc00069ac60) Data frame received for 1\nI0825 00:35:21.965453 2069 log.go:181] (0xc000196280) (1) Data frame handling\nI0825 00:35:21.965478 2069 log.go:181] (0xc000196280) (1) Data frame sent\nI0825 00:35:21.965488 2069 log.go:181] (0xc00069ac60) (0xc000196280) Stream removed, broadcasting: 1\nI0825 00:35:21.965566 2069 log.go:181] (0xc00069ac60) Go away received\nI0825 00:35:21.965824 2069 log.go:181] (0xc00069ac60) (0xc000196280) Stream removed, broadcasting: 1\nI0825 00:35:21.965840 2069 log.go:181] (0xc00069ac60) (0xc000554000) Stream removed, broadcasting: 3\nI0825 00:35:21.965848 2069 log.go:181] (0xc00069ac60) (0xc000b0e0a0) Stream removed, broadcasting: 5\n" Aug 25 00:35:21.970: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 25 00:35:21.970: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 25 00:35:21.970: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2116 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 25 00:35:22.165: INFO: stderr: "I0825 00:35:22.101140 2087 log.go:181] (0xc0002e3ef0) (0xc000130a00) Create stream\nI0825 00:35:22.101220 2087 log.go:181] (0xc0002e3ef0) (0xc000130a00) Stream added, broadcasting: 1\nI0825 00:35:22.106613 2087 log.go:181] (0xc0002e3ef0) Reply frame received for 1\nI0825 00:35:22.106654 2087 log.go:181] (0xc0002e3ef0) (0xc000130000) Create stream\nI0825 00:35:22.106663 2087 log.go:181] (0xc0002e3ef0) (0xc000130000) Stream added, broadcasting: 3\nI0825 00:35:22.107505 2087 log.go:181] (0xc0002e3ef0) Reply frame received for 3\nI0825 00:35:22.107537 2087 log.go:181] (0xc0002e3ef0) (0xc0001300a0) Create stream\nI0825 
00:35:22.107545 2087 log.go:181] (0xc0002e3ef0) (0xc0001300a0) Stream added, broadcasting: 5\nI0825 00:35:22.108344 2087 log.go:181] (0xc0002e3ef0) Reply frame received for 5\nI0825 00:35:22.157904 2087 log.go:181] (0xc0002e3ef0) Data frame received for 3\nI0825 00:35:22.157929 2087 log.go:181] (0xc000130000) (3) Data frame handling\nI0825 00:35:22.157941 2087 log.go:181] (0xc000130000) (3) Data frame sent\nI0825 00:35:22.157948 2087 log.go:181] (0xc0002e3ef0) Data frame received for 3\nI0825 00:35:22.157955 2087 log.go:181] (0xc000130000) (3) Data frame handling\nI0825 00:35:22.157974 2087 log.go:181] (0xc0002e3ef0) Data frame received for 5\nI0825 00:35:22.157988 2087 log.go:181] (0xc0001300a0) (5) Data frame handling\nI0825 00:35:22.157999 2087 log.go:181] (0xc0001300a0) (5) Data frame sent\nI0825 00:35:22.158007 2087 log.go:181] (0xc0002e3ef0) Data frame received for 5\nI0825 00:35:22.158016 2087 log.go:181] (0xc0001300a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0825 00:35:22.159255 2087 log.go:181] (0xc0002e3ef0) Data frame received for 1\nI0825 00:35:22.159272 2087 log.go:181] (0xc000130a00) (1) Data frame handling\nI0825 00:35:22.159284 2087 log.go:181] (0xc000130a00) (1) Data frame sent\nI0825 00:35:22.159296 2087 log.go:181] (0xc0002e3ef0) (0xc000130a00) Stream removed, broadcasting: 1\nI0825 00:35:22.159309 2087 log.go:181] (0xc0002e3ef0) Go away received\nI0825 00:35:22.159594 2087 log.go:181] (0xc0002e3ef0) (0xc000130a00) Stream removed, broadcasting: 1\nI0825 00:35:22.159617 2087 log.go:181] (0xc0002e3ef0) (0xc000130000) Stream removed, broadcasting: 3\nI0825 00:35:22.159626 2087 log.go:181] (0xc0002e3ef0) (0xc0001300a0) Stream removed, broadcasting: 5\n" Aug 25 00:35:22.165: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 25 00:35:22.165: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 25 00:35:22.169: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 25 00:35:22.169: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 25 00:35:22.169: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Aug 25 00:35:22.172: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2116 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 25 00:35:22.407: INFO: stderr: "I0825 00:35:22.323206 2105 log.go:181] (0xc00003a4d0) (0xc00084c000) Create stream\nI0825 00:35:22.323259 2105 log.go:181] (0xc00003a4d0) (0xc00084c000) Stream added, broadcasting: 1\nI0825 00:35:22.325227 2105 log.go:181] (0xc00003a4d0) Reply frame received for 1\nI0825 00:35:22.325285 2105 log.go:181] (0xc00003a4d0) (0xc00084c0a0) Create stream\nI0825 00:35:22.325303 2105 log.go:181] (0xc00003a4d0) (0xc00084c0a0) Stream added, broadcasting: 3\nI0825 00:35:22.326182 2105 log.go:181] (0xc00003a4d0) Reply frame received for 3\nI0825 00:35:22.326216 2105 log.go:181] (0xc00003a4d0) (0xc000afc460) Create stream\nI0825 00:35:22.326235 2105 log.go:181] (0xc00003a4d0) (0xc000afc460) Stream added, broadcasting: 5\nI0825 00:35:22.327190 2105 log.go:181] (0xc00003a4d0) Reply frame received for 5\nI0825 00:35:22.396018 
2105 log.go:181] (0xc00003a4d0) Data frame received for 5\nI0825 00:35:22.396061 2105 log.go:181] (0xc000afc460) (5) Data frame handling\nI0825 00:35:22.396074 2105 log.go:181] (0xc000afc460) (5) Data frame sent\nI0825 00:35:22.396081 2105 log.go:181] (0xc00003a4d0) Data frame received for 5\nI0825 00:35:22.396087 2105 log.go:181] (0xc000afc460) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0825 00:35:22.396109 2105 log.go:181] (0xc00003a4d0) Data frame received for 3\nI0825 00:35:22.396119 2105 log.go:181] (0xc00084c0a0) (3) Data frame handling\nI0825 00:35:22.396136 2105 log.go:181] (0xc00084c0a0) (3) Data frame sent\nI0825 00:35:22.396145 2105 log.go:181] (0xc00003a4d0) Data frame received for 3\nI0825 00:35:22.396152 2105 log.go:181] (0xc00084c0a0) (3) Data frame handling\nI0825 00:35:22.398417 2105 log.go:181] (0xc00003a4d0) Data frame received for 1\nI0825 00:35:22.398433 2105 log.go:181] (0xc00084c000) (1) Data frame handling\nI0825 00:35:22.398445 2105 log.go:181] (0xc00084c000) (1) Data frame sent\nI0825 00:35:22.398917 2105 log.go:181] (0xc00003a4d0) (0xc00084c000) Stream removed, broadcasting: 1\nI0825 00:35:22.399243 2105 log.go:181] (0xc00003a4d0) (0xc00084c000) Stream removed, broadcasting: 1\nI0825 00:35:22.399269 2105 log.go:181] (0xc00003a4d0) (0xc00084c0a0) Stream removed, broadcasting: 3\nI0825 00:35:22.399286 2105 log.go:181] (0xc00003a4d0) (0xc000afc460) Stream removed, broadcasting: 5\n" Aug 25 00:35:22.407: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 25 00:35:22.407: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 25 00:35:22.407: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2116 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 25 00:35:22.852: INFO: stderr: "I0825 00:35:22.706531 2123 log.go:181] (0xc0005c5600) (0xc000cb28c0) Create stream\nI0825 00:35:22.706608 2123 log.go:181] (0xc0005c5600) (0xc000cb28c0) Stream added, broadcasting: 1\nI0825 00:35:22.710047 2123 log.go:181] (0xc0005c5600) Reply frame received for 1\nI0825 00:35:22.710078 2123 log.go:181] (0xc0005c5600) (0xc000532500) Create stream\nI0825 00:35:22.710091 2123 log.go:181] (0xc0005c5600) (0xc000532500) Stream added, broadcasting: 3\nI0825 00:35:22.711015 2123 log.go:181] (0xc0005c5600) Reply frame received for 3\nI0825 00:35:22.711038 2123 log.go:181] (0xc0005c5600) (0xc000cb2960) Create stream\nI0825 00:35:22.711046 2123 log.go:181] (0xc0005c5600) (0xc000cb2960) Stream added, broadcasting: 5\nI0825 00:35:22.712063 2123 log.go:181] (0xc0005c5600) Reply frame received for 5\nI0825 00:35:22.785677 2123 log.go:181] (0xc0005c5600) Data frame received for 5\nI0825 00:35:22.785705 2123 log.go:181] (0xc000cb2960) (5) Data frame handling\nI0825 00:35:22.785755 2123 log.go:181] (0xc000cb2960) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0825 00:35:22.844056 2123 log.go:181] (0xc0005c5600) Data frame received for 3\nI0825 00:35:22.844078 2123 log.go:181] (0xc000532500) (3) Data frame handling\nI0825 00:35:22.844091 2123 log.go:181] (0xc000532500) (3) Data frame sent\nI0825 00:35:22.844097 2123 log.go:181] (0xc0005c5600) Data frame received for 3\nI0825 00:35:22.844102 2123 log.go:181] (0xc000532500) (3) Data frame handling\nI0825 00:35:22.844355 2123 log.go:181] (0xc0005c5600) Data frame 
received for 5\nI0825 00:35:22.844366 2123 log.go:181] (0xc000cb2960) (5) Data frame handling\nI0825 00:35:22.845798 2123 log.go:181] (0xc0005c5600) Data frame received for 1\nI0825 00:35:22.845831 2123 log.go:181] (0xc000cb28c0) (1) Data frame handling\nI0825 00:35:22.845880 2123 log.go:181] (0xc000cb28c0) (1) Data frame sent\nI0825 00:35:22.845908 2123 log.go:181] (0xc0005c5600) (0xc000cb28c0) Stream removed, broadcasting: 1\nI0825 00:35:22.845928 2123 log.go:181] (0xc0005c5600) Go away received\nI0825 00:35:22.846265 2123 log.go:181] (0xc0005c5600) (0xc000cb28c0) Stream removed, broadcasting: 1\nI0825 00:35:22.846278 2123 log.go:181] (0xc0005c5600) (0xc000532500) Stream removed, broadcasting: 3\nI0825 00:35:22.846283 2123 log.go:181] (0xc0005c5600) (0xc000cb2960) Stream removed, broadcasting: 5\n" Aug 25 00:35:22.852: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 25 00:35:22.852: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 25 00:35:22.852: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2116 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 25 00:35:23.912: INFO: stderr: "I0825 00:35:23.387591 2141 log.go:181] (0xc000f294a0) (0xc00066ab40) Create stream\nI0825 00:35:23.387684 2141 log.go:181] (0xc000f294a0) (0xc00066ab40) Stream added, broadcasting: 1\nI0825 00:35:23.392559 2141 log.go:181] (0xc000f294a0) Reply frame received for 1\nI0825 00:35:23.392599 2141 log.go:181] (0xc000f294a0) (0xc00096a140) Create stream\nI0825 00:35:23.392609 2141 log.go:181] (0xc000f294a0) (0xc00096a140) Stream added, broadcasting: 3\nI0825 00:35:23.393696 2141 log.go:181] (0xc000f294a0) Reply frame received for 3\nI0825 00:35:23.393721 2141 log.go:181] (0xc000f294a0) (0xc00066b5e0) Create stream\nI0825 00:35:23.393729 2141 log.go:181] (0xc000f294a0) (0xc00066b5e0) Stream added, broadcasting: 5\nI0825 00:35:23.394610 2141 log.go:181] (0xc000f294a0) Reply frame received for 5\nI0825 00:35:23.460666 2141 log.go:181] (0xc000f294a0) Data frame received for 5\nI0825 00:35:23.460710 2141 log.go:181] (0xc00066b5e0) (5) Data frame handling\nI0825 00:35:23.460850 2141 log.go:181] (0xc00066b5e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0825 00:35:23.901452 2141 log.go:181] (0xc000f294a0) Data frame received for 3\nI0825 00:35:23.901492 2141 log.go:181] (0xc00096a140) (3) Data frame handling\nI0825 00:35:23.901519 2141 log.go:181] (0xc00096a140) (3) Data frame sent\nI0825 00:35:23.901532 2141 log.go:181] (0xc000f294a0) Data frame received for 3\nI0825 00:35:23.901543 2141 log.go:181] (0xc00096a140) (3) Data frame handling\nI0825 00:35:23.901710 2141 log.go:181] (0xc000f294a0) Data frame received for 5\nI0825 00:35:23.901738 2141 log.go:181] (0xc00066b5e0) (5) Data frame handling\nI0825 00:35:23.903495 2141 log.go:181] (0xc000f294a0) Data frame received for 1\nI0825 00:35:23.903526 2141 log.go:181] (0xc00066ab40) (1) Data frame handling\nI0825 00:35:23.903562 2141 log.go:181] (0xc00066ab40) (1) Data frame sent\nI0825 00:35:23.903593 2141 log.go:181] (0xc000f294a0) (0xc00066ab40) Stream removed, broadcasting: 1\nI0825 00:35:23.903611 2141 log.go:181] (0xc000f294a0) Go away received\nI0825 00:35:23.904013 2141 log.go:181] (0xc000f294a0) (0xc00066ab40) Stream removed, broadcasting: 1\nI0825 00:35:23.904029 2141 log.go:181] 
(0xc000f294a0) (0xc00096a140) Stream removed, broadcasting: 3\nI0825 00:35:23.904037 2141 log.go:181] (0xc000f294a0) (0xc00066b5e0) Stream removed, broadcasting: 5\n" Aug 25 00:35:23.912: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 25 00:35:23.912: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 25 00:35:23.912: INFO: Waiting for statefulset status.replicas updated to 0 Aug 25 00:35:23.915: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Aug 25 00:35:34.248: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 25 00:35:34.248: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 25 00:35:34.248: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 25 00:35:34.985: INFO: POD NODE PHASE GRACE CONDITIONS Aug 25 00:35:34.985: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:34:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:34:50 +0000 UTC }] Aug 25 00:35:34.985: INFO: ss-1 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:10 +0000 UTC }] Aug 25 00:35:34.985: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:10 +0000 UTC }] Aug 25 00:35:34.985: INFO: Aug 25 00:35:34.985: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 25 00:35:36.834: INFO: POD NODE PHASE GRACE CONDITIONS Aug 25 00:35:36.834: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:34:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:34:50 +0000 UTC }] Aug 25 00:35:36.834: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:23 +0000 UTC ContainersNotReady containers 
with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:10 +0000 UTC }] Aug 25 00:35:36.834: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:10 +0000 UTC }] Aug 25 00:35:36.834: INFO: Aug 25 00:35:36.834: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 25 00:35:39.083: INFO: POD NODE PHASE GRACE CONDITIONS Aug 25 00:35:39.083: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:34:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:34:50 +0000 UTC }] Aug 25 00:35:39.083: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:10 +0000 UTC }] Aug 25 00:35:39.083: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:10 +0000 UTC }] Aug 25 00:35:39.083: INFO: Aug 25 00:35:39.083: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 25 00:35:40.713: INFO: POD NODE PHASE GRACE CONDITIONS Aug 25 00:35:40.713: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:34:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:34:50 +0000 UTC }] Aug 25 00:35:40.713: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:10 +0000 UTC }] Aug 25 00:35:40.713: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 
00:00:00 +0000 UTC 2020-08-25 00:35:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:10 +0000 UTC }] Aug 25 00:35:40.713: INFO: Aug 25 00:35:40.713: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 25 00:35:41.962: INFO: POD NODE PHASE GRACE CONDITIONS Aug 25 00:35:41.962: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:34:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:34:50 +0000 UTC }] Aug 25 00:35:41.963: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:10 +0000 UTC }] Aug 25 00:35:41.963: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:10 +0000 UTC }] Aug 25 00:35:41.963: INFO: Aug 25 00:35:41.963: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 25 00:35:42.970: INFO: POD NODE PHASE GRACE CONDITIONS Aug 25 00:35:42.970: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:34:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:34:50 +0000 UTC }] Aug 25 00:35:42.970: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:10 +0000 UTC }] Aug 25 00:35:42.970: INFO: ss-2 latest-worker Pending 0s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady 
False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:10 +0000 UTC }] Aug 25 00:35:42.970: INFO: Aug 25 00:35:42.970: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 25 00:35:44.045: INFO: POD NODE PHASE GRACE CONDITIONS Aug 25 00:35:44.046: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:34:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:34:50 +0000 UTC }] Aug 25 00:35:44.046: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-25 00:35:10 +0000 UTC }] Aug 25 00:35:44.046: INFO: Aug 25 00:35:44.046: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-2116 Aug 25 00:35:45.873: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2116 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 25 00:35:46.479: INFO: rc: 1 Aug 25 00:35:46.479: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2116 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Aug 25 00:35:56.480: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2116 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 25 00:35:56.971: INFO: rc: 1 Aug 25 00:35:56.971: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2116 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Aug 25 00:36:06.971: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2116 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 25 00:36:07.077: INFO: rc: 1 Aug 25 00:36:07.077: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2116 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: 
exit status 1 [identical retries every 10s from 00:36:17 through 00:40:35 elided: each ran the same kubectl exec against ss-1 and failed with rc: 1, stderr: Error from server (NotFound): pods "ss-1" not found] Aug 25 00:40:45.235: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2116 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 25 00:40:45.346: INFO: rc: 1 Aug 25 00:40:45.346: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2116 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods
"ss-1" not found error: exit status 1 Aug 25 00:40:55.347: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2116 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 25 00:40:55.465: INFO: rc: 1 Aug 25 00:40:55.465: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: Aug 25 00:40:55.465: INFO: Scaling statefulset ss to 0 Aug 25 00:40:55.472: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 25 00:40:55.474: INFO: Deleting all statefulset in ns statefulset-2116 Aug 25 00:40:55.475: INFO: Scaling statefulset ss to 0 Aug 25 00:40:55.482: INFO: Waiting for statefulset status.replicas updated to 0 Aug 25 00:40:55.484: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:40:55.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2116" for this suite. • [SLOW TEST:366.311 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":303,"completed":219,"skipped":3666,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:40:56.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 
00:41:06.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6337" for this suite. • [SLOW TEST:10.942 seconds] [k8s.io] Docker Containers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":303,"completed":220,"skipped":3667,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:41:06.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 25 00:41:07.175: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dbdef2f7-b997-4da2-b1be-33a30d952035" in namespace "downward-api-2127" to be "Succeeded or Failed" Aug 25 00:41:07.266: INFO: Pod "downwardapi-volume-dbdef2f7-b997-4da2-b1be-33a30d952035": Phase="Pending", Reason="", readiness=false. Elapsed: 90.679374ms Aug 25 00:41:09.270: INFO: Pod "downwardapi-volume-dbdef2f7-b997-4da2-b1be-33a30d952035": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095253486s Aug 25 00:41:11.335: INFO: Pod "downwardapi-volume-dbdef2f7-b997-4da2-b1be-33a30d952035": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.159558925s STEP: Saw pod success Aug 25 00:41:11.335: INFO: Pod "downwardapi-volume-dbdef2f7-b997-4da2-b1be-33a30d952035" satisfied condition "Succeeded or Failed" Aug 25 00:41:11.338: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-dbdef2f7-b997-4da2-b1be-33a30d952035 container client-container: STEP: delete the pod Aug 25 00:41:11.384: INFO: Waiting for pod downwardapi-volume-dbdef2f7-b997-4da2-b1be-33a30d952035 to disappear Aug 25 00:41:11.400: INFO: Pod downwardapi-volume-dbdef2f7-b997-4da2-b1be-33a30d952035 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:41:11.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2127" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":221,"skipped":3668,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:41:11.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Aug 25 00:41:11.517: INFO: Waiting up to 5m0s for pod "pod-95075ff5-5d77-452e-993c-8e2a6af59af1" in namespace "emptydir-5208" to be "Succeeded or Failed" Aug 25 00:41:11.520: INFO: Pod "pod-95075ff5-5d77-452e-993c-8e2a6af59af1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.519963ms Aug 25 00:41:13.526: INFO: Pod "pod-95075ff5-5d77-452e-993c-8e2a6af59af1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00976001s Aug 25 00:41:15.584: INFO: Pod "pod-95075ff5-5d77-452e-993c-8e2a6af59af1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067303271s Aug 25 00:41:17.588: INFO: Pod "pod-95075ff5-5d77-452e-993c-8e2a6af59af1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.070972288s STEP: Saw pod success Aug 25 00:41:17.588: INFO: Pod "pod-95075ff5-5d77-452e-993c-8e2a6af59af1" satisfied condition "Succeeded or Failed" Aug 25 00:41:17.590: INFO: Trying to get logs from node latest-worker2 pod pod-95075ff5-5d77-452e-993c-8e2a6af59af1 container test-container: STEP: delete the pod Aug 25 00:41:17.663: INFO: Waiting for pod pod-95075ff5-5d77-452e-993c-8e2a6af59af1 to disappear Aug 25 00:41:17.703: INFO: Pod pod-95075ff5-5d77-452e-993c-8e2a6af59af1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:41:17.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5208" for this suite. • [SLOW TEST:6.301 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":222,"skipped":3673,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:41:17.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:41:21.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5989" for this suite. 
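Editor's note: the Kubelet check above can be reproduced by hand. A minimal sketch, assuming kubectl points at a comparable cluster; the pod name is illustrative and this is not the suite's exact pod spec:

# Run a command that always fails, with no restarts, then read the recorded reason.
kubectl run always-fails --image=busybox --restart=Never -- /bin/false
sleep 10   # give the kubelet a moment to record the container exit
kubectl get pod always-fails -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
# Prints "Error" for a non-zero exit; a clean exit would report "Completed".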
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":303,"completed":223,"skipped":3679,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:41:21.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-543ac05a-4304-423f-8dc5-ca7905094e2c STEP: Creating a pod to test consume configMaps Aug 25 00:41:22.167: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7f678098-c0eb-4631-835c-63b281c49496" in namespace "projected-6592" to be "Succeeded or Failed" Aug 25 00:41:22.179: INFO: Pod "pod-projected-configmaps-7f678098-c0eb-4631-835c-63b281c49496": Phase="Pending", Reason="", readiness=false. Elapsed: 12.23571ms Aug 25 00:41:24.188: INFO: Pod "pod-projected-configmaps-7f678098-c0eb-4631-835c-63b281c49496": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020925278s Aug 25 00:41:26.191: INFO: Pod "pod-projected-configmaps-7f678098-c0eb-4631-835c-63b281c49496": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024642324s Aug 25 00:41:28.196: INFO: Pod "pod-projected-configmaps-7f678098-c0eb-4631-835c-63b281c49496": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028744115s STEP: Saw pod success Aug 25 00:41:28.196: INFO: Pod "pod-projected-configmaps-7f678098-c0eb-4631-835c-63b281c49496" satisfied condition "Succeeded or Failed" Aug 25 00:41:28.199: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-7f678098-c0eb-4631-835c-63b281c49496 container projected-configmap-volume-test: STEP: delete the pod Aug 25 00:41:28.223: INFO: Waiting for pod pod-projected-configmaps-7f678098-c0eb-4631-835c-63b281c49496 to disappear Aug 25 00:41:28.271: INFO: Pod pod-projected-configmaps-7f678098-c0eb-4631-835c-63b281c49496 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:41:28.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6592" for this suite. 
• [SLOW TEST:6.425 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":224,"skipped":3753,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSS ------------------------------ [k8s.io] Pods should delete a collection of pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:41:28.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should delete a collection of pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of pods Aug 25 00:41:28.356: INFO: created test-pod-1 Aug 25 00:41:28.365: INFO: created test-pod-2 Aug 25 00:41:28.428: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:41:28.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3007" for this suite. 
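Editor's note: the collection delete exercised above is a single API call against a label-selected set, not one delete per pod. A rough kubectl equivalent, with an illustrative label and image:

# Create a set of pods sharing a label...
for i in 1 2 3; do kubectl run test-pod-$i --image=k8s.gcr.io/pause:3.2 --labels=group=collection; done
# ...then delete the whole collection with one request, selected by label:
kubectl delete pods -l group=collection
kubectl get pods -l group=collection   # should come back empty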
•{"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":303,"completed":225,"skipped":3764,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:41:28.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 25 00:41:29.165: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"cf531380-d8ba-4fd0-a798-3e58aa14be87", Controller:(*bool)(0xc0026beb6a), BlockOwnerDeletion:(*bool)(0xc0026beb6b)}} Aug 25 00:41:29.262: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"290297b3-506a-483c-a613-dcdf79f3529e", Controller:(*bool)(0xc002d95bf2), BlockOwnerDeletion:(*bool)(0xc002d95bf3)}} Aug 25 00:41:29.320: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"0dbfefc1-0d38-418e-8dcc-3655fe8ab33c", Controller:(*bool)(0xc00275e9b2), BlockOwnerDeletion:(*bool)(0xc00275e9b3)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:41:35.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2957" for this suite. 
• [SLOW TEST:7.715 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":303,"completed":226,"skipped":3788,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:41:36.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Aug 25 00:41:36.871: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:41:48.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8296" for this suite. 
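Editor's note: "invoke init containers" here means each initContainer must run to completion, in order, before the app container starts, even on a pod that will never restart. A minimal sketch (names and images illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox
    command: ["true"]
  - name: init-2
    image: busybox
    command: ["true"]
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo main ran"]
EOF
# Both init containers should report Completed before 'main' ever starts:
kubectl get pod init-demo -o jsonpath='{.status.initContainerStatuses[*].state.terminated.reason}'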
• [SLOW TEST:12.653 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":303,"completed":227,"skipped":3788,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:41:49.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if v1 is in available api versions [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions Aug 25 00:41:50.012: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config api-versions' Aug 25 00:41:50.686: INFO: stderr: "" Aug 25 00:41:50.686: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:41:50.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7141" for this suite. 
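Editor's note: the api-versions validation above boils down to one line, reproducible directly against the same kubeconfig:

kubectl api-versions | grep -x v1   # exits 0 only if the core "v1" group/version is served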
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":303,"completed":228,"skipped":3792,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:41:50.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted Aug 25 00:42:07.719: INFO: 10 pods remaining Aug 25 00:42:07.719: INFO: 5 pods has nil DeletionTimestamp Aug 25 00:42:07.719: INFO: STEP: Gathering metrics W0825 00:42:13.001924 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 25 00:43:16.149: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Aug 25 00:43:16.149: INFO: Deleting pod "simpletest-rc-to-be-deleted-5ppqm" in namespace "gc-3347" Aug 25 00:43:16.256: INFO: Deleting pod "simpletest-rc-to-be-deleted-6c7zs" in namespace "gc-3347" Aug 25 00:43:16.502: INFO: Deleting pod "simpletest-rc-to-be-deleted-8gd6t" in namespace "gc-3347" Aug 25 00:43:16.650: INFO: Deleting pod "simpletest-rc-to-be-deleted-b7cg4" in namespace "gc-3347" Aug 25 00:43:16.822: INFO: Deleting pod "simpletest-rc-to-be-deleted-c2sbp" in namespace "gc-3347" [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:43:17.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3347" for this suite. 
• [SLOW TEST:87.435 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":303,"completed":229,"skipped":3811,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:43:18.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Aug 25 00:43:27.022: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 25 00:43:27.041: INFO: Pod pod-with-prestop-http-hook still exists Aug 25 00:43:29.041: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 25 00:43:29.052: INFO: Pod pod-with-prestop-http-hook still exists Aug 25 00:43:31.041: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 25 00:43:31.046: INFO: Pod pod-with-prestop-http-hook still exists Aug 25 00:43:33.041: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 25 00:43:33.046: INFO: Pod pod-with-prestop-http-hook still exists Aug 25 00:43:35.041: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 25 00:43:35.046: INFO: Pod pod-with-prestop-http-hook still exists Aug 25 00:43:37.041: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 25 00:43:37.046: INFO: Pod pod-with-prestop-http-hook still exists Aug 25 00:43:39.041: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 25 00:43:39.046: INFO: Pod pod-with-prestop-http-hook still exists Aug 25 00:43:41.041: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 25 00:43:41.551: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:43:41.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7555" for this suite. 
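Editor's note: for reference, the shape of a preStop httpGet hook. A minimal sketch with illustrative names, here pointed at the container's own HTTP port rather than the separate handler pod the suite deploys:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-http-demo
spec:
  containers:
  - name: app
    image: nginx          # any image serving HTTP on the hook's port will do
    lifecycle:
      preStop:
        httpGet:          # the kubelet issues this GET before sending SIGTERM
          path: /
          port: 80
EOF
# Deleting the pod triggers the hook during the graceful-termination window:
kubectl delete pod prestop-http-demo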
• [SLOW TEST:23.674 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":303,"completed":230,"skipped":3811,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:43:41.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-2712 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 25 00:43:42.373: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Aug 25 00:43:42.539: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 25 00:43:44.542: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 25 00:43:46.712: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 25 00:43:48.676: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 25 00:43:50.820: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 25 00:43:52.604: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 25 00:43:54.823: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 25 00:43:56.676: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 25 00:43:58.543: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 25 00:44:00.542: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 25 00:44:02.754: INFO: The status of Pod netserver-0 is Running (Ready = true) Aug 25 00:44:02.758: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 25 00:44:04.763: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 25 00:44:06.764: INFO: The 
status of Pod netserver-1 is Running (Ready = false) Aug 25 00:44:08.762: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Aug 25 00:44:19.271: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.164:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2712 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 25 00:44:19.271: INFO: >>> kubeConfig: /root/.kube/config I0825 00:44:19.300977 7 log.go:181] (0xc0000f0370) (0xc0048b0320) Create stream I0825 00:44:19.301004 7 log.go:181] (0xc0000f0370) (0xc0048b0320) Stream added, broadcasting: 1 I0825 00:44:19.302377 7 log.go:181] (0xc0000f0370) Reply frame received for 1 I0825 00:44:19.302407 7 log.go:181] (0xc0000f0370) (0xc00148a000) Create stream I0825 00:44:19.302416 7 log.go:181] (0xc0000f0370) (0xc00148a000) Stream added, broadcasting: 3 I0825 00:44:19.303134 7 log.go:181] (0xc0000f0370) Reply frame received for 3 I0825 00:44:19.303154 7 log.go:181] (0xc0000f0370) (0xc00148a0a0) Create stream I0825 00:44:19.303164 7 log.go:181] (0xc0000f0370) (0xc00148a0a0) Stream added, broadcasting: 5 I0825 00:44:19.303697 7 log.go:181] (0xc0000f0370) Reply frame received for 5 I0825 00:44:19.375113 7 log.go:181] (0xc0000f0370) Data frame received for 3 I0825 00:44:19.375137 7 log.go:181] (0xc00148a000) (3) Data frame handling I0825 00:44:19.375144 7 log.go:181] (0xc00148a000) (3) Data frame sent I0825 00:44:19.375161 7 log.go:181] (0xc0000f0370) Data frame received for 5 I0825 00:44:19.375184 7 log.go:181] (0xc00148a0a0) (5) Data frame handling I0825 00:44:19.375244 7 log.go:181] (0xc0000f0370) Data frame received for 3 I0825 00:44:19.375289 7 log.go:181] (0xc00148a000) (3) Data frame handling I0825 00:44:19.376571 7 log.go:181] (0xc0000f0370) Data frame received for 1 I0825 00:44:19.376609 7 log.go:181] (0xc0048b0320) (1) Data frame handling I0825 00:44:19.376638 7 log.go:181] (0xc0048b0320) (1) Data frame sent I0825 00:44:19.376662 7 log.go:181] (0xc0000f0370) (0xc0048b0320) Stream removed, broadcasting: 1 I0825 00:44:19.376688 7 log.go:181] (0xc0000f0370) Go away received I0825 00:44:19.376825 7 log.go:181] (0xc0000f0370) (0xc0048b0320) Stream removed, broadcasting: 1 I0825 00:44:19.376849 7 log.go:181] (0xc0000f0370) (0xc00148a000) Stream removed, broadcasting: 3 I0825 00:44:19.376858 7 log.go:181] (0xc0000f0370) (0xc00148a0a0) Stream removed, broadcasting: 5 Aug 25 00:44:19.376: INFO: Found all expected endpoints: [netserver-0] Aug 25 00:44:19.508: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.14:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2712 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 25 00:44:19.508: INFO: >>> kubeConfig: /root/.kube/config I0825 00:44:19.937749 7 log.go:181] (0xc000ab4000) (0xc00148a140) Create stream I0825 00:44:19.937793 7 log.go:181] (0xc000ab4000) (0xc00148a140) Stream added, broadcasting: 1 I0825 00:44:19.939594 7 log.go:181] (0xc000ab4000) Reply frame received for 1 I0825 00:44:19.939643 7 log.go:181] (0xc000ab4000) (0xc00114c1e0) Create stream I0825 00:44:19.939654 7 log.go:181] (0xc000ab4000) (0xc00114c1e0) Stream added, broadcasting: 3 I0825 00:44:19.940526 7 log.go:181] (0xc000ab4000) Reply frame received for 3 I0825 00:44:19.940574 7 log.go:181] (0xc000ab4000) (0xc00114c320) Create 
stream I0825 00:44:19.940589 7 log.go:181] (0xc000ab4000) (0xc00114c320) Stream added, broadcasting: 5 I0825 00:44:19.941517 7 log.go:181] (0xc000ab4000) Reply frame received for 5 I0825 00:44:20.019707 7 log.go:181] (0xc000ab4000) Data frame received for 5 I0825 00:44:20.019750 7 log.go:181] (0xc00114c320) (5) Data frame handling I0825 00:44:20.019774 7 log.go:181] (0xc000ab4000) Data frame received for 3 I0825 00:44:20.019787 7 log.go:181] (0xc00114c1e0) (3) Data frame handling I0825 00:44:20.019796 7 log.go:181] (0xc00114c1e0) (3) Data frame sent I0825 00:44:20.019801 7 log.go:181] (0xc000ab4000) Data frame received for 3 I0825 00:44:20.019809 7 log.go:181] (0xc00114c1e0) (3) Data frame handling I0825 00:44:20.021022 7 log.go:181] (0xc000ab4000) Data frame received for 1 I0825 00:44:20.021062 7 log.go:181] (0xc00148a140) (1) Data frame handling I0825 00:44:20.021083 7 log.go:181] (0xc00148a140) (1) Data frame sent I0825 00:44:20.021098 7 log.go:181] (0xc000ab4000) (0xc00148a140) Stream removed, broadcasting: 1 I0825 00:44:20.021129 7 log.go:181] (0xc000ab4000) Go away received I0825 00:44:20.021248 7 log.go:181] (0xc000ab4000) (0xc00148a140) Stream removed, broadcasting: 1 I0825 00:44:20.021277 7 log.go:181] (0xc000ab4000) (0xc00114c1e0) Stream removed, broadcasting: 3 I0825 00:44:20.021286 7 log.go:181] (0xc000ab4000) (0xc00114c320) Stream removed, broadcasting: 5 Aug 25 00:44:20.021: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:44:20.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2712" for this suite. 
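(The connectivity check above boils down to one command: from the host-network helper pod, curl each netserver pod's IP on port 8080 and read back its hostname. Reproduced here by hand; the namespace, pod names, and IP are the ephemeral ones from this run and will differ elsewhere.)

kubectl exec -n pod-network-test-2712 host-test-container-pod -c agnhost -- \
  /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 \
    http://10.244.2.164:8080/hostName"
# Prints the serving pod's hostname (netserver-0 here); the test passes once
# every expected endpoint has answered.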
• [SLOW TEST:38.083 seconds] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":231,"skipped":3857,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:44:20.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0825 00:44:32.842215 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 25 00:45:35.284: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:45:35.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8501" for this suite. 
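(A rough kubectl equivalent of what this garbage-collector test asserts: delete a replication controller without orphaning and the GC removes its pods. The manifest and names below are illustrative; the e2e test drives the same thing through the API with default DeleteOptions.)

kubectl create -f demo-rc.yaml        # hypothetical RC manifest with a few replicas
kubectl delete rc demo-rc             # default propagation: dependents are collected
kubectl get pods -l name=demo-rc      # drains to empty once the GC catches up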
• [SLOW TEST:75.262 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":303,"completed":232,"skipped":3858,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:45:35.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args Aug 25 00:45:36.160: INFO: Waiting up to 5m0s for pod "var-expansion-e9513638-f63a-40fd-850c-72e9990767aa" in namespace "var-expansion-9164" to be "Succeeded or Failed" Aug 25 00:45:36.337: INFO: Pod "var-expansion-e9513638-f63a-40fd-850c-72e9990767aa": Phase="Pending", Reason="", readiness=false. Elapsed: 176.334547ms Aug 25 00:45:38.341: INFO: Pod "var-expansion-e9513638-f63a-40fd-850c-72e9990767aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180844956s Aug 25 00:45:40.393: INFO: Pod "var-expansion-e9513638-f63a-40fd-850c-72e9990767aa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.232532218s Aug 25 00:45:42.509: INFO: Pod "var-expansion-e9513638-f63a-40fd-850c-72e9990767aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.348847203s STEP: Saw pod success Aug 25 00:45:42.509: INFO: Pod "var-expansion-e9513638-f63a-40fd-850c-72e9990767aa" satisfied condition "Succeeded or Failed" Aug 25 00:45:42.512: INFO: Trying to get logs from node latest-worker2 pod var-expansion-e9513638-f63a-40fd-850c-72e9990767aa container dapi-container: STEP: delete the pod Aug 25 00:45:43.098: INFO: Waiting for pod var-expansion-e9513638-f63a-40fd-850c-72e9990767aa to disappear Aug 25 00:45:43.324: INFO: Pod var-expansion-e9513638-f63a-40fd-850c-72e9990767aa no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:45:43.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9164" for this suite. 
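(What "substituting values in a container's args" means concretely: the kubelet expands $(VAR) references against the container's env before the process starts. A minimal sketch, with an illustrative pod name and message:)

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.32
    env:
    - name: MESSAGE
      value: "hello from the environment"
    command: ["/bin/sh", "-c"]
    args: ["echo $(MESSAGE)"]   # expanded by the kubelet, not by the shell
EOF
kubectl logs var-expansion-demo   # expect: hello from the environment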
• [SLOW TEST:8.341 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":303,"completed":233,"skipped":3869,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:45:43.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 25 00:45:44.274: INFO: Waiting up to 5m0s for pod "downwardapi-volume-183c7f34-c318-42cb-906a-ce50e84fb669" in namespace "projected-8586" to be "Succeeded or Failed" Aug 25 00:45:44.791: INFO: Pod "downwardapi-volume-183c7f34-c318-42cb-906a-ce50e84fb669": Phase="Pending", Reason="", readiness=false. Elapsed: 517.304598ms Aug 25 00:45:46.953: INFO: Pod "downwardapi-volume-183c7f34-c318-42cb-906a-ce50e84fb669": Phase="Pending", Reason="", readiness=false. Elapsed: 2.679332327s Aug 25 00:45:49.289: INFO: Pod "downwardapi-volume-183c7f34-c318-42cb-906a-ce50e84fb669": Phase="Pending", Reason="", readiness=false. Elapsed: 5.015354963s Aug 25 00:45:51.318: INFO: Pod "downwardapi-volume-183c7f34-c318-42cb-906a-ce50e84fb669": Phase="Pending", Reason="", readiness=false. Elapsed: 7.044149578s Aug 25 00:45:53.379: INFO: Pod "downwardapi-volume-183c7f34-c318-42cb-906a-ce50e84fb669": Phase="Running", Reason="", readiness=true. Elapsed: 9.10523614s Aug 25 00:45:55.606: INFO: Pod "downwardapi-volume-183c7f34-c318-42cb-906a-ce50e84fb669": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.332079122s STEP: Saw pod success Aug 25 00:45:55.606: INFO: Pod "downwardapi-volume-183c7f34-c318-42cb-906a-ce50e84fb669" satisfied condition "Succeeded or Failed" Aug 25 00:45:55.611: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-183c7f34-c318-42cb-906a-ce50e84fb669 container client-container: STEP: delete the pod Aug 25 00:45:55.668: INFO: Waiting for pod downwardapi-volume-183c7f34-c318-42cb-906a-ce50e84fb669 to disappear Aug 25 00:45:56.011: INFO: Pod downwardapi-volume-183c7f34-c318-42cb-906a-ce50e84fb669 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:45:56.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8586" for this suite. • [SLOW TEST:12.385 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":234,"skipped":3898,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:45:56.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 25 00:45:58.559: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Aug 25 00:45:58.589: INFO: Number of nodes with available pods: 0 Aug 25 00:45:58.589: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Aug 25 00:45:58.923: INFO: Number of nodes with available pods: 0 Aug 25 00:45:58.923: INFO: Node latest-worker2 is running more than one daemon pod Aug 25 00:46:00.319: INFO: Number of nodes with available pods: 0 Aug 25 00:46:00.319: INFO: Node latest-worker2 is running more than one daemon pod Aug 25 00:46:01.159: INFO: Number of nodes with available pods: 0 Aug 25 00:46:01.159: INFO: Node latest-worker2 is running more than one daemon pod Aug 25 00:46:01.999: INFO: Number of nodes with available pods: 0 Aug 25 00:46:01.999: INFO: Node latest-worker2 is running more than one daemon pod Aug 25 00:46:02.989: INFO: Number of nodes with available pods: 0 Aug 25 00:46:02.989: INFO: Node latest-worker2 is running more than one daemon pod Aug 25 00:46:04.438: INFO: Number of nodes with available pods: 0 Aug 25 00:46:04.438: INFO: Node latest-worker2 is running more than one daemon pod Aug 25 00:46:05.049: INFO: Number of nodes with available pods: 0 Aug 25 00:46:05.049: INFO: Node latest-worker2 is running more than one daemon pod Aug 25 00:46:06.146: INFO: Number of nodes with available pods: 0 Aug 25 00:46:06.146: INFO: Node latest-worker2 is running more than one daemon pod Aug 25 00:46:07.050: INFO: Number of nodes with available pods: 0 Aug 25 00:46:07.050: INFO: Node latest-worker2 is running more than one daemon pod Aug 25 00:46:08.306: INFO: Number of nodes with available pods: 1 Aug 25 00:46:08.306: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Aug 25 00:46:08.563: INFO: Number of nodes with available pods: 1 Aug 25 00:46:08.563: INFO: Number of running nodes: 0, number of available pods: 1 Aug 25 00:46:09.672: INFO: Number of nodes with available pods: 0 Aug 25 00:46:09.672: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Aug 25 00:46:09.768: INFO: Number of nodes with available pods: 0 Aug 25 00:46:09.769: INFO: Node latest-worker2 is running more than one daemon pod Aug 25 00:46:10.947: INFO: Number of nodes with available pods: 0 Aug 25 00:46:10.947: INFO: Node latest-worker2 is running more than one daemon pod Aug 25 00:46:12.460: INFO: Number of nodes with available pods: 0 Aug 25 00:46:12.460: INFO: Node latest-worker2 is running more than one daemon pod Aug 25 00:46:13.152: INFO: Number of nodes with available pods: 0 Aug 25 00:46:13.152: INFO: Node latest-worker2 is running more than one daemon pod Aug 25 00:46:15.212: INFO: Number of nodes with available pods: 0 Aug 25 00:46:15.212: INFO: Node latest-worker2 is running more than one daemon pod Aug 25 00:46:16.224: INFO: Number of nodes with available pods: 0 Aug 25 00:46:16.224: INFO: Node latest-worker2 is running more than one daemon pod Aug 25 00:46:16.773: INFO: Number of nodes with available pods: 0 Aug 25 00:46:16.773: INFO: Node latest-worker2 is running more than one daemon pod Aug 25 00:46:17.954: INFO: Number of nodes with available pods: 0 Aug 25 00:46:17.954: INFO: Node latest-worker2 is running more than one daemon pod Aug 25 00:46:18.938: INFO: Number of nodes with available pods: 0 Aug 25 00:46:18.938: INFO: Node latest-worker2 is running more than one daemon pod Aug 25 00:46:20.013: INFO: Number of nodes with available pods: 0 Aug 25 00:46:20.013: INFO: Node latest-worker2 is running more than one daemon pod Aug 25 00:46:21.081: INFO: Number of nodes with available pods: 0 Aug 25 00:46:21.081: INFO: Node 
latest-worker2 is running more than one daemon pod Aug 25 00:46:21.996: INFO: Number of nodes with available pods: 0 Aug 25 00:46:21.996: INFO: Node latest-worker2 is running more than one daemon pod Aug 25 00:46:22.772: INFO: Number of nodes with available pods: 0 Aug 25 00:46:22.772: INFO: Node latest-worker2 is running more than one daemon pod Aug 25 00:46:23.870: INFO: Number of nodes with available pods: 0 Aug 25 00:46:23.870: INFO: Node latest-worker2 is running more than one daemon pod Aug 25 00:46:24.772: INFO: Number of nodes with available pods: 0 Aug 25 00:46:24.772: INFO: Node latest-worker2 is running more than one daemon pod Aug 25 00:46:25.882: INFO: Number of nodes with available pods: 0 Aug 25 00:46:25.882: INFO: Node latest-worker2 is running more than one daemon pod Aug 25 00:46:26.948: INFO: Number of nodes with available pods: 0 Aug 25 00:46:26.948: INFO: Node latest-worker2 is running more than one daemon pod Aug 25 00:46:27.773: INFO: Number of nodes with available pods: 0 Aug 25 00:46:27.773: INFO: Node latest-worker2 is running more than one daemon pod Aug 25 00:46:28.773: INFO: Number of nodes with available pods: 1 Aug 25 00:46:28.773: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4565, will wait for the garbage collector to delete the pods Aug 25 00:46:28.835: INFO: Deleting DaemonSet.extensions daemon-set took: 6.258839ms Aug 25 00:46:29.235: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.204523ms Aug 25 00:46:39.739: INFO: Number of nodes with available pods: 0 Aug 25 00:46:39.739: INFO: Number of running nodes: 0, number of available pods: 0 Aug 25 00:46:39.742: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4565/daemonsets","resourceVersion":"3436812"},"items":null} Aug 25 00:46:39.745: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4565/pods","resourceVersion":"3436812"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:46:39.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4565" for this suite. 
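(The "complex daemon" above is a DaemonSet pinned to a node label, toggled by relabelling the node. A hand-runnable sketch under the assumption of a similar selector; the label key/value, image, and node name are illustrative:)

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels: {app: daemon-set}
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels: {app: daemon-set}
    spec:
      nodeSelector:
        color: blue             # only nodes carrying this label run the daemon
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.2
EOF
kubectl label node latest-worker2 color=blue              # daemon pod is launched
kubectl label node latest-worker2 color=green --overwrite # daemon pod is evicted again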
• [SLOW TEST:43.827 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":303,"completed":235,"skipped":3904,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:46:39.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Update Demo /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308 [It] should create and stop a replication controller [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Aug 25 00:46:39.894: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4767' Aug 25 00:46:40.268: INFO: stderr: "" Aug 25 00:46:40.268: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 25 00:46:40.268: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4767' Aug 25 00:46:40.399: INFO: stderr: "" Aug 25 00:46:40.399: INFO: stdout: "update-demo-nautilus-qj9vt update-demo-nautilus-w87bm " Aug 25 00:46:40.399: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qj9vt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4767' Aug 25 00:46:40.511: INFO: stderr: "" Aug 25 00:46:40.511: INFO: stdout: "" Aug 25 00:46:40.511: INFO: update-demo-nautilus-qj9vt is created but not running Aug 25 00:46:45.511: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4767' Aug 25 00:46:45.767: INFO: stderr: "" Aug 25 00:46:45.767: INFO: stdout: "update-demo-nautilus-qj9vt update-demo-nautilus-w87bm " Aug 25 00:46:45.767: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qj9vt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4767' Aug 25 00:46:45.884: INFO: stderr: "" Aug 25 00:46:45.884: INFO: stdout: "" Aug 25 00:46:45.884: INFO: update-demo-nautilus-qj9vt is created but not running Aug 25 00:46:50.884: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4767' Aug 25 00:46:51.011: INFO: stderr: "" Aug 25 00:46:51.011: INFO: stdout: "update-demo-nautilus-qj9vt update-demo-nautilus-w87bm " Aug 25 00:46:51.011: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qj9vt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4767' Aug 25 00:46:51.111: INFO: stderr: "" Aug 25 00:46:51.111: INFO: stdout: "true" Aug 25 00:46:51.111: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qj9vt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4767' Aug 25 00:46:51.219: INFO: stderr: "" Aug 25 00:46:51.219: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 25 00:46:51.219: INFO: validating pod update-demo-nautilus-qj9vt Aug 25 00:46:51.223: INFO: got data: { "image": "nautilus.jpg" } Aug 25 00:46:51.223: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 25 00:46:51.223: INFO: update-demo-nautilus-qj9vt is verified up and running Aug 25 00:46:51.223: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w87bm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4767' Aug 25 00:46:51.317: INFO: stderr: "" Aug 25 00:46:51.317: INFO: stdout: "true" Aug 25 00:46:51.317: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w87bm -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4767' Aug 25 00:46:51.417: INFO: stderr: "" Aug 25 00:46:51.417: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 25 00:46:51.417: INFO: validating pod update-demo-nautilus-w87bm Aug 25 00:46:51.420: INFO: got data: { "image": "nautilus.jpg" } Aug 25 00:46:51.420: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 25 00:46:51.420: INFO: update-demo-nautilus-w87bm is verified up and running STEP: using delete to clean up resources Aug 25 00:46:51.420: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4767' Aug 25 00:46:51.543: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 25 00:46:51.543: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Aug 25 00:46:51.543: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4767' Aug 25 00:46:51.652: INFO: stderr: "No resources found in kubectl-4767 namespace.\n" Aug 25 00:46:51.652: INFO: stdout: "" Aug 25 00:46:51.652: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4767 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 25 00:46:51.832: INFO: stderr: "" Aug 25 00:46:51.832: INFO: stdout: "update-demo-nautilus-qj9vt\nupdate-demo-nautilus-w87bm\n" Aug 25 00:46:52.333: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4767' Aug 25 00:46:52.561: INFO: stderr: "No resources found in kubectl-4767 namespace.\n" Aug 25 00:46:52.561: INFO: stdout: "" Aug 25 00:46:52.561: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4767 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 25 00:46:52.670: INFO: stderr: "" Aug 25 00:46:52.670: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:46:52.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4767" for this suite. 
• [SLOW TEST:12.837 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306 should create and stop a replication controller [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":303,"completed":236,"skipped":3910,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:46:52.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 25 00:46:53.178: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-6174' Aug 25 00:46:53.353: INFO: stderr: "" Aug 25 00:46:53.353: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Aug 25 00:46:53.353: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod -o json --namespace=kubectl-6174' Aug 25 00:46:53.596: INFO: stderr: "" Aug 25 00:46:53.596: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-08-25T00:46:53Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": 
{}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-08-25T00:46:53Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-6174\",\n \"resourceVersion\": \"3436905\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-6174/pods/e2e-test-httpd-pod\",\n \"uid\": \"58218139-e195-4bda-8887-ab6f2b609c9d\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-5pgjx\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-5pgjx\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-5pgjx\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-25T00:46:53Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"phase\": \"Pending\",\n \"qosClass\": \"BestEffort\"\n }\n}\n" Aug 25 00:46:53.596: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config replace -f - --dry-run server --namespace=kubectl-6174' Aug 25 00:46:54.721: INFO: stderr: "W0825 00:46:53.667277 3023 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.\n" Aug 25 00:46:54.721: INFO: stdout: "pod/e2e-test-httpd-pod replaced (dry run)\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine Aug 25 00:46:54.819: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6174' Aug 25 00:46:59.976: INFO: stderr: "" Aug 25 00:46:59.976: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:46:59.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6174" for this suite. 
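(The deprecation warning in the log is worth acting on: bare "--dry-run server" is the old spelling. A sketch of the same round-trip with the current flag; the sed image swap is just an illustrative mutation:)

kubectl get pod e2e-test-httpd-pod -n kubectl-6174 -o json \
  | sed 's|httpd:2.4.38-alpine|busybox:1.32|' \
  | kubectl replace --dry-run=server -f -
# The server admits and validates the replacement without persisting it, so:
kubectl get pod e2e-test-httpd-pod -n kubectl-6174 \
  -o jsonpath='{.spec.containers[0].image}'   # still httpd:2.4.38-alpine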
• [SLOW TEST:7.322 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:919 should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":303,"completed":237,"skipped":3918,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:47:00.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Aug 25 00:47:00.242: INFO: Waiting up to 1m0s for all nodes to be ready Aug 25 00:48:00.267: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Aug 25 00:48:00.304: INFO: Created pod: pod0-sched-preemption-low-priority Aug 25 00:48:00.336: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:48:36.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-7074" for this suite. 
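(Preemption hinges on PriorityClass objects like the low/medium/high ones this test creates. A minimal sketch, with an illustrative class name and value; a pod opts in via spec.priorityClassName:)

cat <<'EOF' | kubectl apply -f -
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-demo
value: 1000
globalDefault: false
description: "demo class: pods using it may preempt lower-priority pods"
EOF
# A pod with spec.priorityClassName: high-priority-demo that cannot otherwise
# be scheduled will evict lower-priority pods, which is the behaviour the test
# validates with its 2/3-of-node-resources filler pods.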
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:96.756 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":303,"completed":238,"skipped":3929,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:48:36.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 25 00:48:36.828: INFO: Waiting up to 5m0s for pod "downwardapi-volume-19f52fc0-9f53-4c08-8e92-9d3779b9daa3" in namespace "downward-api-92" to be "Succeeded or Failed" Aug 25 00:48:36.877: INFO: Pod "downwardapi-volume-19f52fc0-9f53-4c08-8e92-9d3779b9daa3": Phase="Pending", Reason="", readiness=false. Elapsed: 49.42896ms Aug 25 00:48:38.882: INFO: Pod "downwardapi-volume-19f52fc0-9f53-4c08-8e92-9d3779b9daa3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054222766s Aug 25 00:48:40.886: INFO: Pod "downwardapi-volume-19f52fc0-9f53-4c08-8e92-9d3779b9daa3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058231469s Aug 25 00:48:43.082: INFO: Pod "downwardapi-volume-19f52fc0-9f53-4c08-8e92-9d3779b9daa3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.254439003s STEP: Saw pod success Aug 25 00:48:43.082: INFO: Pod "downwardapi-volume-19f52fc0-9f53-4c08-8e92-9d3779b9daa3" satisfied condition "Succeeded or Failed" Aug 25 00:48:43.086: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-19f52fc0-9f53-4c08-8e92-9d3779b9daa3 container client-container: STEP: delete the pod Aug 25 00:48:43.379: INFO: Waiting for pod downwardapi-volume-19f52fc0-9f53-4c08-8e92-9d3779b9daa3 to disappear Aug 25 00:48:43.473: INFO: Pod downwardapi-volume-19f52fc0-9f53-4c08-8e92-9d3779b9daa3 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:48:43.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-92" for this suite. • [SLOW TEST:6.815 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":239,"skipped":3992,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:48:43.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 25 00:48:44.410: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c2819159-1679-46ee-a6d7-c85850983657" in namespace "downward-api-7215" to be "Succeeded or Failed" Aug 25 00:48:44.641: INFO: Pod "downwardapi-volume-c2819159-1679-46ee-a6d7-c85850983657": Phase="Pending", Reason="", readiness=false. Elapsed: 230.906649ms Aug 25 00:48:46.776: INFO: Pod "downwardapi-volume-c2819159-1679-46ee-a6d7-c85850983657": Phase="Pending", Reason="", readiness=false. Elapsed: 2.366640364s Aug 25 00:48:48.781: INFO: Pod "downwardapi-volume-c2819159-1679-46ee-a6d7-c85850983657": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.371004699s Aug 25 00:48:50.807: INFO: Pod "downwardapi-volume-c2819159-1679-46ee-a6d7-c85850983657": Phase="Pending", Reason="", readiness=false. Elapsed: 6.396882271s Aug 25 00:48:52.825: INFO: Pod "downwardapi-volume-c2819159-1679-46ee-a6d7-c85850983657": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.415250272s STEP: Saw pod success Aug 25 00:48:52.825: INFO: Pod "downwardapi-volume-c2819159-1679-46ee-a6d7-c85850983657" satisfied condition "Succeeded or Failed" Aug 25 00:48:52.828: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-c2819159-1679-46ee-a6d7-c85850983657 container client-container: STEP: delete the pod Aug 25 00:48:53.153: INFO: Waiting for pod downwardapi-volume-c2819159-1679-46ee-a6d7-c85850983657 to disappear Aug 25 00:48:53.391: INFO: Pod downwardapi-volume-c2819159-1679-46ee-a6d7-c85850983657 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:48:53.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7215" for this suite. • [SLOW TEST:9.821 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":240,"skipped":3992,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:48:53.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:48:53.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9380" for this suite. 
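(The secret-patching steps above map onto kubectl one-liners; the names and the label key are illustrative:)

kubectl create secret generic demo-secret --from-literal=key=value
kubectl patch secret demo-secret -p '{"metadata":{"labels":{"testsecret":"true"}}}'
kubectl get secrets --all-namespaces -l testsecret=true   # the patched secret shows up
kubectl delete secret -l testsecret=true                  # delete via the LabelSelector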
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":303,"completed":241,"skipped":3995,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:48:53.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Aug 25 00:48:54.034: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Aug 25 00:49:06.708: INFO: >>> kubeConfig: /root/.kube/config Aug 25 00:49:09.692: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:49:24.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6371" for this suite. 
• [SLOW TEST:30.381 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":303,"completed":242,"skipped":3997,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:49:24.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0825 00:49:26.029176 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 25 00:50:28.048: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:50:28.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5317" for this suite. 
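The garbage collector spec above deletes a deployment with deleteOptions.PropagationPolicy=Orphan and then waits to confirm the ReplicaSet is not collected. Roughly the same thing from kubectl, as a sketch (the deployment name is illustrative; kubectl of this era maps --cascade=false to orphan propagation, which later versions spell --cascade=orphan):

kubectl create deployment demo --image=nginx
# delete only the Deployment object; its ReplicaSet is orphaned rather than cascaded
kubectl delete deployment demo --cascade=false
# the ReplicaSet and its pods should survive, now without an ownerReference
kubectl get rs,pods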
• [SLOW TEST:63.679 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":303,"completed":243,"skipped":4041,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:50:28.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6244.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6244.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6244.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6244.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 25 00:50:42.468: INFO: DNS probes using dns-test-c43988eb-632d-43fa-9735-309373288055 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6244.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6244.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6244.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6244.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 25 00:50:55.723: INFO: File wheezy_udp@dns-test-service-3.dns-6244.svc.cluster.local from pod dns-6244/dns-test-4d2042bc-6fe5-4406-a014-7bc78f8c92a9 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 25 00:50:55.726: INFO: File jessie_udp@dns-test-service-3.dns-6244.svc.cluster.local from pod dns-6244/dns-test-4d2042bc-6fe5-4406-a014-7bc78f8c92a9 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Aug 25 00:50:55.726: INFO: Lookups using dns-6244/dns-test-4d2042bc-6fe5-4406-a014-7bc78f8c92a9 failed for: [wheezy_udp@dns-test-service-3.dns-6244.svc.cluster.local jessie_udp@dns-test-service-3.dns-6244.svc.cluster.local] Aug 25 00:51:00.731: INFO: File wheezy_udp@dns-test-service-3.dns-6244.svc.cluster.local from pod dns-6244/dns-test-4d2042bc-6fe5-4406-a014-7bc78f8c92a9 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 25 00:51:00.735: INFO: File jessie_udp@dns-test-service-3.dns-6244.svc.cluster.local from pod dns-6244/dns-test-4d2042bc-6fe5-4406-a014-7bc78f8c92a9 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 25 00:51:00.735: INFO: Lookups using dns-6244/dns-test-4d2042bc-6fe5-4406-a014-7bc78f8c92a9 failed for: [wheezy_udp@dns-test-service-3.dns-6244.svc.cluster.local jessie_udp@dns-test-service-3.dns-6244.svc.cluster.local] Aug 25 00:51:05.730: INFO: File wheezy_udp@dns-test-service-3.dns-6244.svc.cluster.local from pod dns-6244/dns-test-4d2042bc-6fe5-4406-a014-7bc78f8c92a9 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 25 00:51:05.733: INFO: File jessie_udp@dns-test-service-3.dns-6244.svc.cluster.local from pod dns-6244/dns-test-4d2042bc-6fe5-4406-a014-7bc78f8c92a9 contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 25 00:51:05.733: INFO: Lookups using dns-6244/dns-test-4d2042bc-6fe5-4406-a014-7bc78f8c92a9 failed for: [wheezy_udp@dns-test-service-3.dns-6244.svc.cluster.local jessie_udp@dns-test-service-3.dns-6244.svc.cluster.local] Aug 25 00:51:10.762: INFO: DNS probes using dns-test-4d2042bc-6fe5-4406-a014-7bc78f8c92a9 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6244.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6244.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6244.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6244.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 25 00:51:25.606: INFO: DNS probes using dns-test-e5c0d9f3-c0f6-4ca4-a5e8-b47b118ee56f succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:51:25.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6244" for this suite. 
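The DNS spec above walks an ExternalName service through three states (CNAME to foo.example.com, CNAME to bar.example.com, then type=ClusterIP), probing each with the dig loops shown; the transient "contains 'foo.example.com.' instead of 'bar.example.com.'" lines are the probes retrying while the updated CNAME propagates, after which the lookups succeed. A minimal sketch of the first state (service name, namespace, and probe image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
spec:
  type: ExternalName
  externalName: foo.example.com
EOF
# from a throwaway pod, the in-cluster name should resolve as a CNAME
kubectl run dig-probe --rm -it --restart=Never --image=tutum/dnsutils -- \
  dig +short dns-test-service-3.default.svc.cluster.local CNAME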
• [SLOW TEST:57.682 seconds] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":303,"completed":244,"skipped":4050,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:51:25.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-cdfeefb7-3880-49bc-a1a9-ad9ab07b3a73 STEP: Creating secret with name s-test-opt-upd-5e7cc6b7-1bed-4aee-aba8-9bd66975b6c3 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-cdfeefb7-3880-49bc-a1a9-ad9ab07b3a73 STEP: Updating secret s-test-opt-upd-5e7cc6b7-1bed-4aee-aba8-9bd66975b6c3 STEP: Creating secret with name s-test-opt-create-d8e21625-b8c2-4311-9aee-1ce954ce78b2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:52:46.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5477" for this suite. 
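The projected-secret spec above mounts optional secret sources and waits for the kubelet to reflect a delete, an update, and a create in the mounted volume. A minimal projected volume with an optional secret source, as a sketch (all names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo
spec:
  containers:
  - name: view
    image: busybox
    command: ["sh", "-c", "while true; do ls /etc/projected; sleep 5; done"]
    volumeMounts:
    - name: all-in-one
      mountPath: /etc/projected
  volumes:
  - name: all-in-one
    projected:
      sources:
      - secret:
          name: s-test-opt
          optional: true   # the pod starts even if this secret does not exist yet
EOF
# creating, updating, or deleting the secret is eventually reflected under /etc/projected
kubectl create secret generic s-test-opt --from-literal=data=value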
• [SLOW TEST:80.503 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":245,"skipped":4071,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:52:46.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Aug 25 00:52:46.401: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:52:57.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9115" for this suite. 
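The init-container spec above asserts that, with restartPolicy: Never, a failing init container fails the whole pod and the app containers are never started. A minimal reproduction, as a sketch (names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: busybox
    command: ["sh", "-c", "exit 1"]   # exits non-zero, so the pod fails
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo never runs"]
EOF
# the pod should report Init:Error and the app container should never start
kubectl get pod init-fail-demo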
• [SLOW TEST:11.457 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":303,"completed":246,"skipped":4089,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:52:57.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 25 00:52:58.355: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 25 00:53:00.365: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733913578, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733913578, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733913578, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733913578, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 25 00:53:03.448: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: 
fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:53:03.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4933" for this suite. STEP: Destroying namespace "webhook-4933-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.923 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":303,"completed":247,"skipped":4129,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:53:03.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-bb6339d2-1581-477f-95e7-73ba554463d2 STEP: Creating a pod to test consume secrets Aug 25 00:53:03.782: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ef887917-a5a2-4e39-9632-cc63b38f5ee1" in namespace "projected-2322" to be "Succeeded or Failed" Aug 25 00:53:03.823: INFO: Pod 
"pod-projected-secrets-ef887917-a5a2-4e39-9632-cc63b38f5ee1": Phase="Pending", Reason="", readiness=false. Elapsed: 40.564527ms Aug 25 00:53:05.827: INFO: Pod "pod-projected-secrets-ef887917-a5a2-4e39-9632-cc63b38f5ee1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044938156s Aug 25 00:53:07.831: INFO: Pod "pod-projected-secrets-ef887917-a5a2-4e39-9632-cc63b38f5ee1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048584216s STEP: Saw pod success Aug 25 00:53:07.831: INFO: Pod "pod-projected-secrets-ef887917-a5a2-4e39-9632-cc63b38f5ee1" satisfied condition "Succeeded or Failed" Aug 25 00:53:07.833: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-ef887917-a5a2-4e39-9632-cc63b38f5ee1 container projected-secret-volume-test: STEP: delete the pod Aug 25 00:53:08.075: INFO: Waiting for pod pod-projected-secrets-ef887917-a5a2-4e39-9632-cc63b38f5ee1 to disappear Aug 25 00:53:08.126: INFO: Pod pod-projected-secrets-ef887917-a5a2-4e39-9632-cc63b38f5ee1 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:53:08.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2322" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":248,"skipped":4147,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:53:08.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs Aug 25 00:53:08.241: INFO: Waiting up to 5m0s for pod "pod-ba4b329d-0d7a-44df-944a-c49ee39c8a62" in namespace "emptydir-1942" to be "Succeeded or Failed" Aug 25 00:53:08.245: INFO: Pod "pod-ba4b329d-0d7a-44df-944a-c49ee39c8a62": Phase="Pending", Reason="", readiness=false. Elapsed: 3.695283ms Aug 25 00:53:10.379: INFO: Pod "pod-ba4b329d-0d7a-44df-944a-c49ee39c8a62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137667295s Aug 25 00:53:12.516: INFO: Pod "pod-ba4b329d-0d7a-44df-944a-c49ee39c8a62": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.274351026s STEP: Saw pod success Aug 25 00:53:12.516: INFO: Pod "pod-ba4b329d-0d7a-44df-944a-c49ee39c8a62" satisfied condition "Succeeded or Failed" Aug 25 00:53:12.519: INFO: Trying to get logs from node latest-worker2 pod pod-ba4b329d-0d7a-44df-944a-c49ee39c8a62 container test-container: STEP: delete the pod Aug 25 00:53:12.566: INFO: Waiting for pod pod-ba4b329d-0d7a-44df-944a-c49ee39c8a62 to disappear Aug 25 00:53:12.648: INFO: Pod pod-ba4b329d-0d7a-44df-944a-c49ee39c8a62 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:53:12.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1942" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":249,"skipped":4153,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:53:12.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:53:23.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7658" for this suite. • [SLOW TEST:11.118 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":303,"completed":250,"skipped":4157,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:53:23.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 25 00:53:23.835: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 25 00:53:23.875: INFO: Waiting for terminating namespaces to be deleted... Aug 25 00:53:23.878: INFO: Logging pods the apiserver thinks is on node latest-worker before test Aug 25 00:53:23.883: INFO: daemon-set-64t9w from daemonsets-1323 started at 2020-08-21 01:17:50 +0000 UTC (1 container statuses recorded) Aug 25 00:53:23.883: INFO: Container app ready: true, restart count 0 Aug 25 00:53:23.883: INFO: kindnet-gmpqb from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Aug 25 00:53:23.883: INFO: Container kindnet-cni ready: true, restart count 1 Aug 25 00:53:23.883: INFO: kube-proxy-82wrf from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Aug 25 00:53:23.883: INFO: Container kube-proxy ready: true, restart count 0 Aug 25 00:53:23.883: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Aug 25 00:53:23.889: INFO: daemon-set-jxhg7 from daemonsets-1323 started at 2020-08-21 01:17:50 +0000 UTC (1 container statuses recorded) Aug 25 00:53:23.889: INFO: Container app ready: true, restart count 0 Aug 25 00:53:23.889: INFO: kindnet-grzzh from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container statuses recorded) Aug 25 00:53:23.889: INFO: Container kindnet-cni ready: true, restart count 1 Aug 25 00:53:23.889: INFO: kube-proxy-fjk8r from kube-system started at 2020-08-15 09:42:29 +0000 UTC (1 container statuses recorded) Aug 25 00:53:23.889: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.162e5caa37f3ba6a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.162e5caa3ab6e64f], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:53:24.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4653" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":303,"completed":251,"skipped":4182,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:53:24.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-3432 Aug 25 00:53:29.137: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-3432 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Aug 25 00:53:32.815: INFO: stderr: "I0825 00:53:32.723033 3055 log.go:181] (0xc00018c420) (0xc00055c1e0) Create stream\nI0825 00:53:32.723095 3055 log.go:181] (0xc00018c420) (0xc00055c1e0) Stream added, broadcasting: 1\nI0825 00:53:32.725051 3055 log.go:181] (0xc00018c420) Reply frame received for 1\nI0825 00:53:32.725090 3055 log.go:181] (0xc00018c420) (0xc000e8c000) Create stream\nI0825 00:53:32.725103 3055 log.go:181] (0xc00018c420) (0xc000e8c000) Stream added, broadcasting: 3\nI0825 00:53:32.726134 3055 log.go:181] (0xc00018c420) Reply frame received for 3\nI0825 00:53:32.726169 3055 log.go:181] (0xc00018c420) (0xc00055c280) Create stream\nI0825 00:53:32.726182 3055 log.go:181] (0xc00018c420) (0xc00055c280) Stream added, broadcasting: 5\nI0825 00:53:32.727170 3055 log.go:181] (0xc00018c420) Reply frame received for 5\nI0825 00:53:32.799165 3055 log.go:181] (0xc00018c420) Data frame received for 5\nI0825 00:53:32.799195 3055 log.go:181] (0xc00055c280) (5) Data frame handling\nI0825 00:53:32.799218 3055 log.go:181] (0xc00055c280) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0825 00:53:32.805139 3055 log.go:181] (0xc00018c420) Data frame received for 
3\nI0825 00:53:32.805164 3055 log.go:181] (0xc000e8c000) (3) Data frame handling\nI0825 00:53:32.805194 3055 log.go:181] (0xc000e8c000) (3) Data frame sent\nI0825 00:53:32.806007 3055 log.go:181] (0xc00018c420) Data frame received for 5\nI0825 00:53:32.806029 3055 log.go:181] (0xc00018c420) Data frame received for 3\nI0825 00:53:32.806053 3055 log.go:181] (0xc000e8c000) (3) Data frame handling\nI0825 00:53:32.806074 3055 log.go:181] (0xc00055c280) (5) Data frame handling\nI0825 00:53:32.807389 3055 log.go:181] (0xc00018c420) Data frame received for 1\nI0825 00:53:32.807411 3055 log.go:181] (0xc00055c1e0) (1) Data frame handling\nI0825 00:53:32.807422 3055 log.go:181] (0xc00055c1e0) (1) Data frame sent\nI0825 00:53:32.807439 3055 log.go:181] (0xc00018c420) (0xc00055c1e0) Stream removed, broadcasting: 1\nI0825 00:53:32.807517 3055 log.go:181] (0xc00018c420) Go away received\nI0825 00:53:32.807826 3055 log.go:181] (0xc00018c420) (0xc00055c1e0) Stream removed, broadcasting: 1\nI0825 00:53:32.807840 3055 log.go:181] (0xc00018c420) (0xc000e8c000) Stream removed, broadcasting: 3\nI0825 00:53:32.807848 3055 log.go:181] (0xc00018c420) (0xc00055c280) Stream removed, broadcasting: 5\n" Aug 25 00:53:32.815: INFO: stdout: "iptables" Aug 25 00:53:32.815: INFO: proxyMode: iptables Aug 25 00:53:32.820: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 25 00:53:32.889: INFO: Pod kube-proxy-mode-detector still exists Aug 25 00:53:34.889: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 25 00:53:34.906: INFO: Pod kube-proxy-mode-detector still exists Aug 25 00:53:36.889: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 25 00:53:36.958: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-3432 STEP: creating replication controller affinity-clusterip-timeout in namespace services-3432 I0825 00:53:37.193178 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-3432, replica count: 3 I0825 00:53:40.243622 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0825 00:53:43.243905 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0825 00:53:46.244182 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0825 00:53:49.244480 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 25 00:53:49.251: INFO: Creating new exec pod Aug 25 00:53:54.272: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-3432 execpod-affinityrpbc2 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Aug 25 00:53:54.540: INFO: stderr: "I0825 00:53:54.430134 3074 log.go:181] (0xc0006c18c0) (0xc0005dea00) Create stream\nI0825 00:53:54.430188 3074 log.go:181] (0xc0006c18c0) (0xc0005dea00) Stream added, broadcasting: 1\nI0825 00:53:54.432865 3074 log.go:181] (0xc0006c18c0) Reply frame received for 1\nI0825 00:53:54.432912 3074 log.go:181] (0xc0006c18c0) (0xc0005deaa0) Create stream\nI0825 00:53:54.432924 3074 log.go:181] (0xc0006c18c0) (0xc0005deaa0) 
Stream added, broadcasting: 3\nI0825 00:53:54.433949 3074 log.go:181] (0xc0006c18c0) Reply frame received for 3\nI0825 00:53:54.433989 3074 log.go:181] (0xc0006c18c0) (0xc0006385a0) Create stream\nI0825 00:53:54.434004 3074 log.go:181] (0xc0006c18c0) (0xc0006385a0) Stream added, broadcasting: 5\nI0825 00:53:54.435110 3074 log.go:181] (0xc0006c18c0) Reply frame received for 5\nI0825 00:53:54.527453 3074 log.go:181] (0xc0006c18c0) Data frame received for 5\nI0825 00:53:54.527500 3074 log.go:181] (0xc0006385a0) (5) Data frame handling\nI0825 00:53:54.527516 3074 log.go:181] (0xc0006385a0) (5) Data frame sent\nI0825 00:53:54.527527 3074 log.go:181] (0xc0006c18c0) Data frame received for 5\nI0825 00:53:54.527541 3074 log.go:181] (0xc0006385a0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0825 00:53:54.527625 3074 log.go:181] (0xc0006c18c0) Data frame received for 3\nI0825 00:53:54.527668 3074 log.go:181] (0xc0005deaa0) (3) Data frame handling\nI0825 00:53:54.530805 3074 log.go:181] (0xc0006c18c0) Data frame received for 1\nI0825 00:53:54.530834 3074 log.go:181] (0xc0005dea00) (1) Data frame handling\nI0825 00:53:54.530847 3074 log.go:181] (0xc0005dea00) (1) Data frame sent\nI0825 00:53:54.530859 3074 log.go:181] (0xc0006c18c0) (0xc0005dea00) Stream removed, broadcasting: 1\nI0825 00:53:54.530872 3074 log.go:181] (0xc0006c18c0) Go away received\nI0825 00:53:54.531287 3074 log.go:181] (0xc0006c18c0) (0xc0005dea00) Stream removed, broadcasting: 1\nI0825 00:53:54.531304 3074 log.go:181] (0xc0006c18c0) (0xc0005deaa0) Stream removed, broadcasting: 3\nI0825 00:53:54.531316 3074 log.go:181] (0xc0006c18c0) (0xc0006385a0) Stream removed, broadcasting: 5\n" Aug 25 00:53:54.540: INFO: stdout: "" Aug 25 00:53:54.541: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-3432 execpod-affinityrpbc2 -- /bin/sh -x -c nc -zv -t -w 2 10.104.146.44 80' Aug 25 00:53:54.742: INFO: stderr: "I0825 00:53:54.659498 3092 log.go:181] (0xc000dc4d10) (0xc000e185a0) Create stream\nI0825 00:53:54.659547 3092 log.go:181] (0xc000dc4d10) (0xc000e185a0) Stream added, broadcasting: 1\nI0825 00:53:54.664314 3092 log.go:181] (0xc000dc4d10) Reply frame received for 1\nI0825 00:53:54.664355 3092 log.go:181] (0xc000dc4d10) (0xc000e18000) Create stream\nI0825 00:53:54.664367 3092 log.go:181] (0xc000dc4d10) (0xc000e18000) Stream added, broadcasting: 3\nI0825 00:53:54.665430 3092 log.go:181] (0xc000dc4d10) Reply frame received for 3\nI0825 00:53:54.665482 3092 log.go:181] (0xc000dc4d10) (0xc000b38aa0) Create stream\nI0825 00:53:54.665504 3092 log.go:181] (0xc000dc4d10) (0xc000b38aa0) Stream added, broadcasting: 5\nI0825 00:53:54.666434 3092 log.go:181] (0xc000dc4d10) Reply frame received for 5\nI0825 00:53:54.729000 3092 log.go:181] (0xc000dc4d10) Data frame received for 5\nI0825 00:53:54.729023 3092 log.go:181] (0xc000b38aa0) (5) Data frame handling\nI0825 00:53:54.729037 3092 log.go:181] (0xc000b38aa0) (5) Data frame sent\n+ nc -zv -t -w 2 10.104.146.44 80\nConnection to 10.104.146.44 80 port [tcp/http] succeeded!\nI0825 00:53:54.729192 3092 log.go:181] (0xc000dc4d10) Data frame received for 5\nI0825 00:53:54.729277 3092 log.go:181] (0xc000b38aa0) (5) Data frame handling\nI0825 00:53:54.729321 3092 log.go:181] (0xc000dc4d10) Data frame received for 3\nI0825 00:53:54.729340 3092 log.go:181] (0xc000e18000) (3) Data frame handling\nI0825 00:53:54.730620 3092 
log.go:181] (0xc000dc4d10) Data frame received for 1\nI0825 00:53:54.730645 3092 log.go:181] (0xc000e185a0) (1) Data frame handling\nI0825 00:53:54.730668 3092 log.go:181] (0xc000e185a0) (1) Data frame sent\nI0825 00:53:54.730831 3092 log.go:181] (0xc000dc4d10) (0xc000e185a0) Stream removed, broadcasting: 1\nI0825 00:53:54.731045 3092 log.go:181] (0xc000dc4d10) Go away received\nI0825 00:53:54.731140 3092 log.go:181] (0xc000dc4d10) (0xc000e185a0) Stream removed, broadcasting: 1\nI0825 00:53:54.731160 3092 log.go:181] (0xc000dc4d10) (0xc000e18000) Stream removed, broadcasting: 3\nI0825 00:53:54.731173 3092 log.go:181] (0xc000dc4d10) (0xc000b38aa0) Stream removed, broadcasting: 5\n" Aug 25 00:53:54.742: INFO: stdout: "" Aug 25 00:53:54.742: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-3432 execpod-affinityrpbc2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.104.146.44:80/ ; done' Aug 25 00:53:55.060: INFO: stderr: "I0825 00:53:54.888850 3110 log.go:181] (0xc000578d10) (0xc000a2a820) Create stream\nI0825 00:53:54.888940 3110 log.go:181] (0xc000578d10) (0xc000a2a820) Stream added, broadcasting: 1\nI0825 00:53:54.893370 3110 log.go:181] (0xc000578d10) Reply frame received for 1\nI0825 00:53:54.893408 3110 log.go:181] (0xc000578d10) (0xc000209220) Create stream\nI0825 00:53:54.893416 3110 log.go:181] (0xc000578d10) (0xc000209220) Stream added, broadcasting: 3\nI0825 00:53:54.894147 3110 log.go:181] (0xc000578d10) Reply frame received for 3\nI0825 00:53:54.894175 3110 log.go:181] (0xc000578d10) (0xc000a2a000) Create stream\nI0825 00:53:54.894183 3110 log.go:181] (0xc000578d10) (0xc000a2a000) Stream added, broadcasting: 5\nI0825 00:53:54.894871 3110 log.go:181] (0xc000578d10) Reply frame received for 5\nI0825 00:53:54.954725 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:54.954767 3110 log.go:181] (0xc000209220) (3) Data frame handling\nI0825 00:53:54.954791 3110 log.go:181] (0xc000209220) (3) Data frame sent\nI0825 00:53:54.954856 3110 log.go:181] (0xc000578d10) Data frame received for 5\nI0825 00:53:54.954890 3110 log.go:181] (0xc000a2a000) (5) Data frame handling\nI0825 00:53:54.954907 3110 log.go:181] (0xc000a2a000) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.146.44:80/\nI0825 00:53:54.958357 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:54.958394 3110 log.go:181] (0xc000209220) (3) Data frame handling\nI0825 00:53:54.958438 3110 log.go:181] (0xc000209220) (3) Data frame sent\nI0825 00:53:54.958897 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:54.958921 3110 log.go:181] (0xc000578d10) Data frame received for 5\nI0825 00:53:54.958939 3110 log.go:181] (0xc000a2a000) (5) Data frame handling\nI0825 00:53:54.958947 3110 log.go:181] (0xc000a2a000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.146.44:80/\nI0825 00:53:54.958960 3110 log.go:181] (0xc000209220) (3) Data frame handling\nI0825 00:53:54.958967 3110 log.go:181] (0xc000209220) (3) Data frame sent\nI0825 00:53:54.962585 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:54.962622 3110 log.go:181] (0xc000209220) (3) Data frame handling\nI0825 00:53:54.962663 3110 log.go:181] (0xc000209220) (3) Data frame sent\nI0825 00:53:54.962907 3110 log.go:181] (0xc000578d10) Data frame received for 5\nI0825 00:53:54.962926 3110 log.go:181] (0xc000a2a000) (5) Data 
frame handling\nI0825 00:53:54.962934 3110 log.go:181] (0xc000a2a000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.146.44:80/\nI0825 00:53:54.962963 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:54.962978 3110 log.go:181] (0xc000209220) (3) Data frame handling\nI0825 00:53:54.962989 3110 log.go:181] (0xc000209220) (3) Data frame sent\nI0825 00:53:54.967628 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:54.967645 3110 log.go:181] (0xc000209220) (3) Data frame handling\nI0825 00:53:54.967659 3110 log.go:181] (0xc000209220) (3) Data frame sent\nI0825 00:53:54.968421 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:54.968450 3110 log.go:181] (0xc000209220) (3) Data frame handling\nI0825 00:53:54.968467 3110 log.go:181] (0xc000209220) (3) Data frame sent\nI0825 00:53:54.968495 3110 log.go:181] (0xc000578d10) Data frame received for 5\nI0825 00:53:54.968519 3110 log.go:181] (0xc000a2a000) (5) Data frame handling\nI0825 00:53:54.968543 3110 log.go:181] (0xc000a2a000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.146.44:80/\nI0825 00:53:54.974942 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:54.974974 3110 log.go:181] (0xc000209220) (3) Data frame handling\nI0825 00:53:54.975001 3110 log.go:181] (0xc000209220) (3) Data frame sent\nI0825 00:53:54.975920 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:54.975985 3110 log.go:181] (0xc000209220) (3) Data frame handling\nI0825 00:53:54.976017 3110 log.go:181] (0xc000578d10) Data frame received for 5\nI0825 00:53:54.976045 3110 log.go:181] (0xc000a2a000) (5) Data frame handling\nI0825 00:53:54.976061 3110 log.go:181] (0xc000a2a000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.146.44:80/\nI0825 00:53:54.976081 3110 log.go:181] (0xc000209220) (3) Data frame sent\nI0825 00:53:54.982537 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:54.982567 3110 log.go:181] (0xc000209220) (3) Data frame handling\nI0825 00:53:54.982591 3110 log.go:181] (0xc000209220) (3) Data frame sent\nI0825 00:53:54.983240 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:54.983269 3110 log.go:181] (0xc000209220) (3) Data frame handling\nI0825 00:53:54.983284 3110 log.go:181] (0xc000209220) (3) Data frame sent\nI0825 00:53:54.983303 3110 log.go:181] (0xc000578d10) Data frame received for 5\nI0825 00:53:54.983314 3110 log.go:181] (0xc000a2a000) (5) Data frame handling\nI0825 00:53:54.983325 3110 log.go:181] (0xc000a2a000) (5) Data frame sent\nI0825 00:53:54.983337 3110 log.go:181] (0xc000578d10) Data frame received for 5\nI0825 00:53:54.983346 3110 log.go:181] (0xc000a2a000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.146.44:80/\nI0825 00:53:54.983378 3110 log.go:181] (0xc000a2a000) (5) Data frame sent\nI0825 00:53:54.989268 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:54.989291 3110 log.go:181] (0xc000209220) (3) Data frame handling\nI0825 00:53:54.989309 3110 log.go:181] (0xc000209220) (3) Data frame sent\nI0825 00:53:54.990045 3110 log.go:181] (0xc000578d10) Data frame received for 5\nI0825 00:53:54.990063 3110 log.go:181] (0xc000a2a000) (5) Data frame handling\nI0825 00:53:54.990071 3110 log.go:181] (0xc000a2a000) (5) Data frame sent\nI0825 00:53:54.990076 3110 log.go:181] (0xc000578d10) Data frame received for 5\nI0825 00:53:54.990087 3110 log.go:181] (0xc000a2a000) (5) 
Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.146.44:80/\nI0825 00:53:54.990106 3110 log.go:181] (0xc000a2a000) (5) Data frame sent\nI0825 00:53:54.990202 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:54.990219 3110 log.go:181] (0xc000209220) (3) Data frame handling\nI0825 00:53:54.990236 3110 log.go:181] (0xc000209220) (3) Data frame sent\nI0825 00:53:54.997204 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:54.997234 3110 log.go:181] (0xc000209220) (3) Data frame handling\nI0825 00:53:54.997258 3110 log.go:181] (0xc000209220) (3) Data frame sent\nI0825 00:53:54.998318 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:54.998338 3110 log.go:181] (0xc000209220) (3) Data frame handling\nI0825 00:53:54.998348 3110 log.go:181] (0xc000209220) (3) Data frame sent\nI0825 00:53:54.998361 3110 log.go:181] (0xc000578d10) Data frame received for 5\nI0825 00:53:54.998369 3110 log.go:181] (0xc000a2a000) (5) Data frame handling\nI0825 00:53:54.998378 3110 log.go:181] (0xc000a2a000) (5) Data frame sent\nI0825 00:53:54.998387 3110 log.go:181] (0xc000578d10) Data frame received for 5\nI0825 00:53:54.998394 3110 log.go:181] (0xc000a2a000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.146.44:80/\nI0825 00:53:54.998414 3110 log.go:181] (0xc000a2a000) (5) Data frame sent\nI0825 00:53:55.003094 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:55.003108 3110 log.go:181] (0xc000209220) (3) Data frame handling\nI0825 00:53:55.003119 3110 log.go:181] (0xc000209220) (3) Data frame sent\nI0825 00:53:55.003872 3110 log.go:181] (0xc000578d10) Data frame received for 5\nI0825 00:53:55.003895 3110 log.go:181] (0xc000a2a000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.146.44:80/\nI0825 00:53:55.003911 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:55.003937 3110 log.go:181] (0xc000209220) (3) Data frame handling\nI0825 00:53:55.003964 3110 log.go:181] (0xc000209220) (3) Data frame sent\nI0825 00:53:55.003988 3110 log.go:181] (0xc000a2a000) (5) Data frame sent\nI0825 00:53:55.010030 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:55.010052 3110 log.go:181] (0xc000209220) (3) Data frame handling\nI0825 00:53:55.010076 3110 log.go:181] (0xc000209220) (3) Data frame sent\nI0825 00:53:55.010473 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:55.010491 3110 log.go:181] (0xc000209220) (3) Data frame handling\nI0825 00:53:55.010504 3110 log.go:181] (0xc000209220) (3) Data frame sent\nI0825 00:53:55.010542 3110 log.go:181] (0xc000578d10) Data frame received for 5\nI0825 00:53:55.010566 3110 log.go:181] (0xc000a2a000) (5) Data frame handling\nI0825 00:53:55.010583 3110 log.go:181] (0xc000a2a000) (5) Data frame sent\nI0825 00:53:55.010596 3110 log.go:181] (0xc000578d10) Data frame received for 5\nI0825 00:53:55.010606 3110 log.go:181] (0xc000a2a000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.146.44:80/\nI0825 00:53:55.010648 3110 log.go:181] (0xc000a2a000) (5) Data frame sent\nI0825 00:53:55.015420 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:55.015432 3110 log.go:181] (0xc000209220) (3) Data frame handling\nI0825 00:53:55.015438 3110 log.go:181] (0xc000209220) (3) Data frame sent\nI0825 00:53:55.015898 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:55.015926 3110 log.go:181] (0xc000209220) 
(3) Data frame handling\nI0825 00:53:55.015938 3110 log.go:181] (0xc000209220) (3) Data frame sent\nI0825 00:53:55.015968 3110 log.go:181] (0xc000578d10) Data frame received for 5\nI0825 00:53:55.015994 3110 log.go:181] (0xc000a2a000) (5) Data frame handling\nI0825 00:53:55.016013 3110 log.go:181] (0xc000a2a000) (5) Data frame sent\nI0825 00:53:55.016024 3110 log.go:181] (0xc000578d10) Data frame received for 5\nI0825 00:53:55.016030 3110 log.go:181] (0xc000a2a000) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.146.44:80/\nI0825 00:53:55.016048 3110 log.go:181] (0xc000a2a000) (5) Data frame sent\nI0825 00:53:55.022599 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:55.022612 3110 log.go:181] (0xc000209220) (3) Data frame handling\nI0825 00:53:55.022619 3110 log.go:181] (0xc000209220) (3) Data frame sent\nI0825 00:53:55.023173 3110 log.go:181] (0xc000578d10) Data frame received for 5\nI0825 00:53:55.023188 3110 log.go:181] (0xc000a2a000) (5) Data frame handling\n+ echo\n+ curlI0825 00:53:55.023203 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:55.023228 3110 log.go:181] (0xc000209220) (3) Data frame handling\nI0825 00:53:55.023244 3110 log.go:181] (0xc000a2a000) (5) Data frame sent\nI0825 00:53:55.023260 3110 log.go:181] (0xc000578d10) Data frame received for 5\nI0825 00:53:55.023270 3110 log.go:181] (0xc000a2a000) (5) Data frame handling\nI0825 00:53:55.023305 3110 log.go:181] (0xc000a2a000) (5) Data frame sent\n -q -s --connect-timeout 2 http://10.104.146.44:80/\nI0825 00:53:55.023325 3110 log.go:181] (0xc000209220) (3) Data frame sent\nI0825 00:53:55.027328 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:55.027349 3110 log.go:181] (0xc000209220) (3) Data frame handling\nI0825 00:53:55.027361 3110 log.go:181] (0xc000209220) (3) Data frame sent\nI0825 00:53:55.028091 3110 log.go:181] (0xc000578d10) Data frame received for 5\nI0825 00:53:55.028105 3110 log.go:181] (0xc000a2a000) (5) Data frame handling\nI0825 00:53:55.028112 3110 log.go:181] (0xc000a2a000) (5) Data frame sent\n+ echo\n+ curl -q -sI0825 00:53:55.028164 3110 log.go:181] (0xc000578d10) Data frame received for 5\nI0825 00:53:55.028174 3110 log.go:181] (0xc000a2a000) (5) Data frame handling\nI0825 00:53:55.028180 3110 log.go:181] (0xc000a2a000) (5) Data frame sent\n --connect-timeout 2 http://10.104.146.44:80/\nI0825 00:53:55.028415 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:55.028429 3110 log.go:181] (0xc000209220) (3) Data frame handling\nI0825 00:53:55.028440 3110 log.go:181] (0xc000209220) (3) Data frame sent\nI0825 00:53:55.031635 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:55.031655 3110 log.go:181] (0xc000209220) (3) Data frame handling\nI0825 00:53:55.031667 3110 log.go:181] (0xc000209220) (3) Data frame sent\nI0825 00:53:55.031909 3110 log.go:181] (0xc000578d10) Data frame received for 5\nI0825 00:53:55.031926 3110 log.go:181] (0xc000a2a000) (5) Data frame handling\nI0825 00:53:55.031942 3110 log.go:181] (0xc000a2a000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.146.44:80/\nI0825 00:53:55.032065 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:55.032085 3110 log.go:181] (0xc000209220) (3) Data frame handling\nI0825 00:53:55.032105 3110 log.go:181] (0xc000209220) (3) Data frame sent\nI0825 00:53:55.039745 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:55.039758 3110 log.go:181] 
(0xc000209220) (3) Data frame handling\nI0825 00:53:55.039766 3110 log.go:181] (0xc000209220) (3) Data frame sent\nI0825 00:53:55.040232 3110 log.go:181] (0xc000578d10) Data frame received for 5\nI0825 00:53:55.040248 3110 log.go:181] (0xc000a2a000) (5) Data frame handling\nI0825 00:53:55.040266 3110 log.go:181] (0xc000a2a000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.146.44:80/\nI0825 00:53:55.040296 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:55.040324 3110 log.go:181] (0xc000209220) (3) Data frame handling\nI0825 00:53:55.040347 3110 log.go:181] (0xc000209220) (3) Data frame sent\nI0825 00:53:55.046038 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:55.046055 3110 log.go:181] (0xc000209220) (3) Data frame handling\nI0825 00:53:55.046065 3110 log.go:181] (0xc000209220) (3) Data frame sent\nI0825 00:53:55.046608 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:55.046626 3110 log.go:181] (0xc000209220) (3) Data frame handling\nI0825 00:53:55.046636 3110 log.go:181] (0xc000209220) (3) Data frame sent\nI0825 00:53:55.046652 3110 log.go:181] (0xc000578d10) Data frame received for 5\nI0825 00:53:55.046669 3110 log.go:181] (0xc000a2a000) (5) Data frame handling\nI0825 00:53:55.046690 3110 log.go:181] (0xc000a2a000) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.104.146.44:80/\nI0825 00:53:55.050078 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:55.050100 3110 log.go:181] (0xc000209220) (3) Data frame handling\nI0825 00:53:55.050115 3110 log.go:181] (0xc000209220) (3) Data frame sent\nI0825 00:53:55.050577 3110 log.go:181] (0xc000578d10) Data frame received for 3\nI0825 00:53:55.050594 3110 log.go:181] (0xc000209220) (3) Data frame handling\nI0825 00:53:55.050624 3110 log.go:181] (0xc000578d10) Data frame received for 5\nI0825 00:53:55.050642 3110 log.go:181] (0xc000a2a000) (5) Data frame handling\nI0825 00:53:55.052325 3110 log.go:181] (0xc000578d10) Data frame received for 1\nI0825 00:53:55.052347 3110 log.go:181] (0xc000a2a820) (1) Data frame handling\nI0825 00:53:55.052356 3110 log.go:181] (0xc000a2a820) (1) Data frame sent\nI0825 00:53:55.052380 3110 log.go:181] (0xc000578d10) (0xc000a2a820) Stream removed, broadcasting: 1\nI0825 00:53:55.052402 3110 log.go:181] (0xc000578d10) Go away received\nI0825 00:53:55.053022 3110 log.go:181] (0xc000578d10) (0xc000a2a820) Stream removed, broadcasting: 1\nI0825 00:53:55.053051 3110 log.go:181] (0xc000578d10) (0xc000209220) Stream removed, broadcasting: 3\nI0825 00:53:55.053065 3110 log.go:181] (0xc000578d10) (0xc000a2a000) Stream removed, broadcasting: 5\n" Aug 25 00:53:55.060: INFO: stdout: "\naffinity-clusterip-timeout-968jf\naffinity-clusterip-timeout-968jf\naffinity-clusterip-timeout-968jf\naffinity-clusterip-timeout-968jf\naffinity-clusterip-timeout-968jf\naffinity-clusterip-timeout-968jf\naffinity-clusterip-timeout-968jf\naffinity-clusterip-timeout-968jf\naffinity-clusterip-timeout-968jf\naffinity-clusterip-timeout-968jf\naffinity-clusterip-timeout-968jf\naffinity-clusterip-timeout-968jf\naffinity-clusterip-timeout-968jf\naffinity-clusterip-timeout-968jf\naffinity-clusterip-timeout-968jf\naffinity-clusterip-timeout-968jf" Aug 25 00:53:55.061: INFO: Received response from host: affinity-clusterip-timeout-968jf Aug 25 00:53:55.061: INFO: Received response from host: affinity-clusterip-timeout-968jf Aug 25 00:53:55.061: INFO: Received response from host: affinity-clusterip-timeout-968jf Aug 25 
00:53:55.061: INFO: Received response from host: affinity-clusterip-timeout-968jf Aug 25 00:53:55.061: INFO: Received response from host: affinity-clusterip-timeout-968jf Aug 25 00:53:55.061: INFO: Received response from host: affinity-clusterip-timeout-968jf Aug 25 00:53:55.061: INFO: Received response from host: affinity-clusterip-timeout-968jf Aug 25 00:53:55.061: INFO: Received response from host: affinity-clusterip-timeout-968jf Aug 25 00:53:55.061: INFO: Received response from host: affinity-clusterip-timeout-968jf Aug 25 00:53:55.061: INFO: Received response from host: affinity-clusterip-timeout-968jf Aug 25 00:53:55.061: INFO: Received response from host: affinity-clusterip-timeout-968jf Aug 25 00:53:55.061: INFO: Received response from host: affinity-clusterip-timeout-968jf Aug 25 00:53:55.061: INFO: Received response from host: affinity-clusterip-timeout-968jf Aug 25 00:53:55.061: INFO: Received response from host: affinity-clusterip-timeout-968jf Aug 25 00:53:55.061: INFO: Received response from host: affinity-clusterip-timeout-968jf Aug 25 00:53:55.061: INFO: Received response from host: affinity-clusterip-timeout-968jf Aug 25 00:53:55.061: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-3432 execpod-affinityrpbc2 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.104.146.44:80/' Aug 25 00:53:55.282: INFO: stderr: "I0825 00:53:55.197333 3128 log.go:181] (0xc00097b3f0) (0xc000972a00) Create stream\nI0825 00:53:55.197385 3128 log.go:181] (0xc00097b3f0) (0xc000972a00) Stream added, broadcasting: 1\nI0825 00:53:55.200241 3128 log.go:181] (0xc00097b3f0) Reply frame received for 1\nI0825 00:53:55.200317 3128 log.go:181] (0xc00097b3f0) (0xc000576000) Create stream\nI0825 00:53:55.200349 3128 log.go:181] (0xc00097b3f0) (0xc000576000) Stream added, broadcasting: 3\nI0825 00:53:55.201712 3128 log.go:181] (0xc00097b3f0) Reply frame received for 3\nI0825 00:53:55.201739 3128 log.go:181] (0xc00097b3f0) (0xc000ca40a0) Create stream\nI0825 00:53:55.201752 3128 log.go:181] (0xc00097b3f0) (0xc000ca40a0) Stream added, broadcasting: 5\nI0825 00:53:55.202656 3128 log.go:181] (0xc00097b3f0) Reply frame received for 5\nI0825 00:53:55.267867 3128 log.go:181] (0xc00097b3f0) Data frame received for 5\nI0825 00:53:55.267909 3128 log.go:181] (0xc000ca40a0) (5) Data frame handling\nI0825 00:53:55.267927 3128 log.go:181] (0xc000ca40a0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.104.146.44:80/\nI0825 00:53:55.270422 3128 log.go:181] (0xc00097b3f0) Data frame received for 3\nI0825 00:53:55.270449 3128 log.go:181] (0xc000576000) (3) Data frame handling\nI0825 00:53:55.270464 3128 log.go:181] (0xc000576000) (3) Data frame sent\nI0825 00:53:55.271111 3128 log.go:181] (0xc00097b3f0) Data frame received for 5\nI0825 00:53:55.271136 3128 log.go:181] (0xc000ca40a0) (5) Data frame handling\nI0825 00:53:55.271263 3128 log.go:181] (0xc00097b3f0) Data frame received for 3\nI0825 00:53:55.271366 3128 log.go:181] (0xc000576000) (3) Data frame handling\nI0825 00:53:55.273000 3128 log.go:181] (0xc00097b3f0) Data frame received for 1\nI0825 00:53:55.273022 3128 log.go:181] (0xc000972a00) (1) Data frame handling\nI0825 00:53:55.273037 3128 log.go:181] (0xc000972a00) (1) Data frame sent\nI0825 00:53:55.273056 3128 log.go:181] (0xc00097b3f0) (0xc000972a00) Stream removed, broadcasting: 1\nI0825 00:53:55.273069 3128 log.go:181] (0xc00097b3f0) Go away received\nI0825 00:53:55.273518 3128 log.go:181] (0xc00097b3f0) 
(0xc000972a00) Stream removed, broadcasting: 1\nI0825 00:53:55.273534 3128 log.go:181] (0xc00097b3f0) (0xc000576000) Stream removed, broadcasting: 3\nI0825 00:53:55.273542 3128 log.go:181] (0xc00097b3f0) (0xc000ca40a0) Stream removed, broadcasting: 5\n" Aug 25 00:53:55.282: INFO: stdout: "affinity-clusterip-timeout-968jf" Aug 25 00:54:10.282: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-3432 execpod-affinityrpbc2 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.104.146.44:80/' Aug 25 00:54:10.513: INFO: stderr: "I0825 00:54:10.420228 3146 log.go:181] (0xc00003a160) (0xc000d561e0) Create stream\nI0825 00:54:10.420297 3146 log.go:181] (0xc00003a160) (0xc000d561e0) Stream added, broadcasting: 1\nI0825 00:54:10.422324 3146 log.go:181] (0xc00003a160) Reply frame received for 1\nI0825 00:54:10.422358 3146 log.go:181] (0xc00003a160) (0xc000d56280) Create stream\nI0825 00:54:10.422368 3146 log.go:181] (0xc00003a160) (0xc000d56280) Stream added, broadcasting: 3\nI0825 00:54:10.423389 3146 log.go:181] (0xc00003a160) Reply frame received for 3\nI0825 00:54:10.423424 3146 log.go:181] (0xc00003a160) (0xc00081a280) Create stream\nI0825 00:54:10.423434 3146 log.go:181] (0xc00003a160) (0xc00081a280) Stream added, broadcasting: 5\nI0825 00:54:10.424235 3146 log.go:181] (0xc00003a160) Reply frame received for 5\nI0825 00:54:10.497072 3146 log.go:181] (0xc00003a160) Data frame received for 5\nI0825 00:54:10.497104 3146 log.go:181] (0xc00081a280) (5) Data frame handling\nI0825 00:54:10.497121 3146 log.go:181] (0xc00081a280) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.104.146.44:80/\nI0825 00:54:10.501376 3146 log.go:181] (0xc00003a160) Data frame received for 3\nI0825 00:54:10.501394 3146 log.go:181] (0xc000d56280) (3) Data frame handling\nI0825 00:54:10.501405 3146 log.go:181] (0xc000d56280) (3) Data frame sent\nI0825 00:54:10.502376 3146 log.go:181] (0xc00003a160) Data frame received for 5\nI0825 00:54:10.502403 3146 log.go:181] (0xc00081a280) (5) Data frame handling\nI0825 00:54:10.502426 3146 log.go:181] (0xc00003a160) Data frame received for 3\nI0825 00:54:10.502450 3146 log.go:181] (0xc000d56280) (3) Data frame handling\nI0825 00:54:10.503830 3146 log.go:181] (0xc00003a160) Data frame received for 1\nI0825 00:54:10.503875 3146 log.go:181] (0xc000d561e0) (1) Data frame handling\nI0825 00:54:10.503908 3146 log.go:181] (0xc000d561e0) (1) Data frame sent\nI0825 00:54:10.503925 3146 log.go:181] (0xc00003a160) (0xc000d561e0) Stream removed, broadcasting: 1\nI0825 00:54:10.503938 3146 log.go:181] (0xc00003a160) Go away received\nI0825 00:54:10.504293 3146 log.go:181] (0xc00003a160) (0xc000d561e0) Stream removed, broadcasting: 1\nI0825 00:54:10.504321 3146 log.go:181] (0xc00003a160) (0xc000d56280) Stream removed, broadcasting: 3\nI0825 00:54:10.504332 3146 log.go:181] (0xc00003a160) (0xc00081a280) Stream removed, broadcasting: 5\n" Aug 25 00:54:10.513: INFO: stdout: "affinity-clusterip-timeout-968jf" Aug 25 00:54:25.513: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-3432 execpod-affinityrpbc2 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.104.146.44:80/' Aug 25 00:54:25.769: INFO: stderr: "I0825 00:54:25.669161 3164 log.go:181] (0xc0001d1290) (0xc000b3c5a0) Create stream\nI0825 00:54:25.669213 3164 log.go:181] (0xc0001d1290) (0xc000b3c5a0) Stream added, broadcasting: 1\nI0825 00:54:25.671297 3164 
log.go:181] (0xc0001d1290) Reply frame received for 1\nI0825 00:54:25.671348 3164 log.go:181] (0xc0001d1290) (0xc00062a5a0) Create stream\nI0825 00:54:25.671370 3164 log.go:181] (0xc0001d1290) (0xc00062a5a0) Stream added, broadcasting: 3\nI0825 00:54:25.672152 3164 log.go:181] (0xc0001d1290) Reply frame received for 3\nI0825 00:54:25.672189 3164 log.go:181] (0xc0001d1290) (0xc0001c8640) Create stream\nI0825 00:54:25.672212 3164 log.go:181] (0xc0001d1290) (0xc0001c8640) Stream added, broadcasting: 5\nI0825 00:54:25.673070 3164 log.go:181] (0xc0001d1290) Reply frame received for 5\nI0825 00:54:25.753097 3164 log.go:181] (0xc0001d1290) Data frame received for 5\nI0825 00:54:25.753141 3164 log.go:181] (0xc0001c8640) (5) Data frame handling\nI0825 00:54:25.753175 3164 log.go:181] (0xc0001c8640) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.104.146.44:80/\nI0825 00:54:25.757835 3164 log.go:181] (0xc0001d1290) Data frame received for 3\nI0825 00:54:25.757863 3164 log.go:181] (0xc00062a5a0) (3) Data frame handling\nI0825 00:54:25.757880 3164 log.go:181] (0xc00062a5a0) (3) Data frame sent\nI0825 00:54:25.758304 3164 log.go:181] (0xc0001d1290) Data frame received for 5\nI0825 00:54:25.758320 3164 log.go:181] (0xc0001c8640) (5) Data frame handling\nI0825 00:54:25.758786 3164 log.go:181] (0xc0001d1290) Data frame received for 3\nI0825 00:54:25.758810 3164 log.go:181] (0xc00062a5a0) (3) Data frame handling\nI0825 00:54:25.759927 3164 log.go:181] (0xc0001d1290) Data frame received for 1\nI0825 00:54:25.759948 3164 log.go:181] (0xc000b3c5a0) (1) Data frame handling\nI0825 00:54:25.759962 3164 log.go:181] (0xc000b3c5a0) (1) Data frame sent\nI0825 00:54:25.759980 3164 log.go:181] (0xc0001d1290) (0xc000b3c5a0) Stream removed, broadcasting: 1\nI0825 00:54:25.759995 3164 log.go:181] (0xc0001d1290) Go away received\nI0825 00:54:25.760365 3164 log.go:181] (0xc0001d1290) (0xc000b3c5a0) Stream removed, broadcasting: 1\nI0825 00:54:25.760393 3164 log.go:181] (0xc0001d1290) (0xc00062a5a0) Stream removed, broadcasting: 3\nI0825 00:54:25.760409 3164 log.go:181] (0xc0001d1290) (0xc0001c8640) Stream removed, broadcasting: 5\n" Aug 25 00:54:25.769: INFO: stdout: "affinity-clusterip-timeout-lrmbq" Aug 25 00:54:25.769: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-3432, will wait for the garbage collector to delete the pods Aug 25 00:54:25.891: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 3.965114ms Aug 25 00:54:26.491: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 600.217253ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:54:40.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3432" for this suite. 
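The run above shows the affinity mechanics end to end: every curl through 10.104.146.44:80 lands on affinity-clusterip-timeout-968jf while the client keeps hitting the service, and after the idle stretch between the 00:54:10 and 00:54:25 probes the stickiness expires and affinity-clusterip-timeout-lrmbq answers instead. The Service manifest itself is not printed in the log; a minimal sketch of the shape being exercised, with an assumed selector label, target port, and timeout value:
apiVersion: v1
kind: Service
metadata:
  name: affinity-clusterip-timeout     # name as seen in the run above
spec:
  type: ClusterIP
  selector:
    name: affinity-clusterip-timeout   # assumed label; must match the RC's pods
  sessionAffinity: ClientIP            # pin each client IP to one backend
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10               # hypothetical value; a client idle past this may land on a new pod
  ports:
  - port: 80                           # the port curled above
    targetPort: 9376                   # assumed container port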
[AfterEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:75.887 seconds] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":252,"skipped":4194,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:54:40.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 25 00:56:41.747: INFO: Deleting pod "var-expansion-eb8d1535-05bf-4d07-b2d2-ab36ee15ecdf" in namespace "var-expansion-8094" Aug 25 00:56:41.752: INFO: Wait up to 5m0s for pod "var-expansion-eb8d1535-05bf-4d07-b2d2-ab36ee15ecdf" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:56:43.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8094" for this suite. 
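The Variable Expansion spec that just finished builds a pod whose volume subpath comes from expanding an environment variable containing backticks, then verifies the pod is refused rather than started (the roughly two-minute gap before the 00:56:41 deletion is the wait for that failure). The manifest is not shown in the log; a hedged sketch of the shape involved, with an assumed image, variable value, and mount layout:
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-backticks            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29  # assumed image
    command: ["sh", "-c", "sleep 3600"]
    env:
    - name: POD_NAME
      value: "`bad`"                       # backticks make the expanded path invalid
    volumeMounts:
    - name: workdir
      mountPath: /volume_mount
      subPathExpr: $(POD_NAME)             # expands to a subpath containing backticks, which must fail
  volumes:
  - name: workdir
    emptyDir: {}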
• [SLOW TEST:122.975 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":303,"completed":253,"skipped":4194,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:56:43.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:56:43.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8882" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":303,"completed":254,"skipped":4195,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:56:43.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:57:02.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6085" for this suite. • [SLOW TEST:18.197 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":303,"completed":255,"skipped":4196,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:57:02.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should create services for rc [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Aug 25 00:57:02.818: INFO: namespace kubectl-7903 Aug 25 00:57:02.818: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7903' Aug 25 00:57:04.433: INFO: stderr: "" Aug 25 00:57:04.433: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. 
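Stepping back to the ResourceQuota spec that passed just above (the Kubectl expose run continues below): the Terminating scope counts only pods that set spec.activeDeadlineSeconds, and NotTerminating counts the rest, which is why each quota captured exactly one of the two pods and ignored the other. A hedged sketch of such a pair of quotas, with illustrative names and limits:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-terminating          # hypothetical name
spec:
  hard:
    pods: "1"                      # illustrative limit
  scopes: ["Terminating"]          # counts only pods with spec.activeDeadlineSeconds set
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-not-terminating      # hypothetical name
spec:
  hard:
    pods: "1"
  scopes: ["NotTerminating"]       # counts only long-running pods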
Aug 25 00:57:05.490: INFO: Selector matched 1 pods for map[app:agnhost] Aug 25 00:57:05.491: INFO: Found 0 / 1 Aug 25 00:57:06.438: INFO: Selector matched 1 pods for map[app:agnhost] Aug 25 00:57:06.438: INFO: Found 0 / 1 Aug 25 00:57:07.569: INFO: Selector matched 1 pods for map[app:agnhost] Aug 25 00:57:07.569: INFO: Found 0 / 1 Aug 25 00:57:08.437: INFO: Selector matched 1 pods for map[app:agnhost] Aug 25 00:57:08.437: INFO: Found 1 / 1 Aug 25 00:57:08.437: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Aug 25 00:57:08.440: INFO: Selector matched 1 pods for map[app:agnhost] Aug 25 00:57:08.440: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Aug 25 00:57:08.440: INFO: wait on agnhost-primary startup in kubectl-7903 Aug 25 00:57:08.440: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config logs agnhost-primary-w7m66 agnhost-primary --namespace=kubectl-7903' Aug 25 00:57:08.556: INFO: stderr: "" Aug 25 00:57:08.556: INFO: stdout: "Paused\n" STEP: exposing RC Aug 25 00:57:08.556: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7903' Aug 25 00:57:08.819: INFO: stderr: "" Aug 25 00:57:08.819: INFO: stdout: "service/rm2 exposed\n" Aug 25 00:57:08.823: INFO: Service rm2 in namespace kubectl-7903 found. STEP: exposing service Aug 25 00:57:10.853: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7903' Aug 25 00:57:11.789: INFO: stderr: "" Aug 25 00:57:11.789: INFO: stdout: "service/rm3 exposed\n" Aug 25 00:57:12.337: INFO: Service rm3 in namespace kubectl-7903 found. [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:57:14.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7903" for this suite. 
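kubectl expose above reads the replication controller's pod selector (the log shows it matching map[app:agnhost]) and generates a Service from the flags; the rm2 object it created is roughly equivalent to applying this manifest, modulo server-side defaults:
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-7903
spec:
  selector:
    app: agnhost        # copied from the RC's pod labels
  ports:
  - port: 1234          # --port
    targetPort: 6379    # --target-port
The second step, exposing service rm2 as rm3, repeats the same translation with the new port pair, which is why rm3 appears moments later.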
• [SLOW TEST:12.180 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1246 should create services for rc [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":303,"completed":256,"skipped":4250,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:57:14.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Aug 25 00:57:27.554: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:57:28.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7798" for this suite. 
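Adoption and release above are purely selector-driven: the bare pod labeled name=pod-adoption-release is claimed by the ReplicaSet because its selector matches, and relabeling the pod makes the controller drop its ownerReference and spin up a replacement. A sketch of the ReplicaSet involved, assuming the agnhost image used elsewhere in this run (the manifest is not printed in the log):
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release    # matches the pre-created orphan pod
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: pod-adoption-release
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # assumed image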
• [SLOW TEST:14.400 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":303,"completed":257,"skipped":4258,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:57:28.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should create and stop a working application [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all guestbook components
Aug 25 00:57:29.411: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-replica
  labels:
    app: agnhost
    role: replica
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: replica
    tier: backend
Aug 25 00:57:29.411: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8203' Aug 25 00:57:31.095: INFO: stderr: "" Aug 25 00:57:31.095: INFO: stdout: "service/agnhost-replica created\n"
Aug 25 00:57:31.099: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-primary
  labels:
    app: agnhost
    role: primary
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: primary
    tier: backend
Aug 25 00:57:31.099: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8203' Aug 25 00:57:32.525: INFO: stderr: "" Aug 25 00:57:32.525: INFO: stdout: "service/agnhost-primary created\n"
Aug 25 00:57:32.525: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Aug 25 00:57:32.525: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8203' Aug 25 00:57:33.772: INFO: stderr: "" Aug 25 00:57:33.772: INFO: stdout: "service/frontend created\n"
Aug 25 00:57:33.772: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Aug 25 00:57:33.772: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8203' Aug 25 00:57:34.135: INFO: stderr: "" Aug 25 00:57:34.135: INFO: stdout: "deployment.apps/frontend created\n"
Aug 25 00:57:34.136: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-primary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: primary
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: primary
        tier: backend
    spec:
      containers:
      - name: primary
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Aug 25 00:57:34.136: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8203' Aug 25 00:57:34.552: INFO: stderr: "" Aug 25 00:57:34.552: INFO: stdout: "deployment.apps/agnhost-primary created\n"
Aug 25 00:57:34.552: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-replica
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: replica
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: replica
        tier: backend
    spec:
      containers:
      - name: replica
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Aug 25 00:57:34.552: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8203' Aug 25 00:57:35.244: INFO: stderr: "" Aug 25 00:57:35.244: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app Aug 25 00:57:35.244: INFO: Waiting for all frontend pods to be Running. Aug 25 00:57:45.294: INFO: Waiting for frontend to serve content. Aug 25 00:57:45.304: INFO: Trying to add a new entry to the guestbook. Aug 25 00:57:45.312: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Aug 25 00:57:45.319: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8203' Aug 25 00:57:45.529: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Aug 25 00:57:45.529: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Aug 25 00:57:45.529: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8203' Aug 25 00:57:45.770: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 25 00:57:45.770: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Aug 25 00:57:45.770: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8203' Aug 25 00:57:45.923: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 25 00:57:45.923: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Aug 25 00:57:45.924: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8203' Aug 25 00:57:46.057: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 25 00:57:46.057: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Aug 25 00:57:46.057: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8203' Aug 25 00:57:46.194: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 25 00:57:46.194: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Aug 25 00:57:46.194: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8203' Aug 25 00:57:46.818: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 25 00:57:46.818: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:57:46.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8203" for this suite. 
• [SLOW TEST:18.097 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:351 should create and stop a working application [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":303,"completed":258,"skipped":4282,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:57:46.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 25 00:57:49.247: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 25 00:57:51.254: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733913869, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733913869, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733913869, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733913869, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 25 00:57:53.344: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733913869, loc:(*time.Location)(0x7712980)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733913869, loc:(*time.Location)(0x7712980)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733913869, loc:(*time.Location)(0x7712980)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733913869, loc:(*time.Location)(0x7712980)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 25 00:57:56.309: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:58:07.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3019" for this suite. STEP: Destroying namespace "webhook-3019-markers" for this suite. 
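Every rejection above is produced by a webhook registered against the e2e-test-webhook service deployed in the BeforeEach, and the bypass at the end works because the registration carries a namespaceSelector that excludes the whitelisted namespace. The registration object is not printed in the log; a minimal sketch with hypothetical names, path, and label key:
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-unwanted-objects-e2e          # hypothetical name
webhooks:
- name: deny-unwanted-objects.example.com  # hypothetical webhook name
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["pods", "configmaps"]
  clientConfig:
    service:
      namespace: webhook-3019              # namespace from the run above
      name: e2e-test-webhook
      path: /always-deny                   # hypothetical handler path
    # caBundle omitted: the suite injects the CA from the server cert it set up
  namespaceSelector:
    matchExpressions:
    - key: skip-webhook                    # hypothetical label; namespaces carrying it bypass admission
      operator: DoesNotExist
  sideEffects: None
  admissionReviewVersions: ["v1"]
  failurePolicy: Fail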
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:22.940 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":303,"completed":259,"skipped":4287,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:58:09.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Aug 25 00:58:10.952: INFO: Waiting up to 5m0s for pod "downward-api-bb0ea691-a40a-4e54-bbe9-16bcdcd755cf" in namespace "downward-api-9600" to be "Succeeded or Failed" Aug 25 00:58:10.954: INFO: Pod "downward-api-bb0ea691-a40a-4e54-bbe9-16bcdcd755cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.493343ms Aug 25 00:58:13.023: INFO: Pod "downward-api-bb0ea691-a40a-4e54-bbe9-16bcdcd755cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07129857s Aug 25 00:58:15.057: INFO: Pod "downward-api-bb0ea691-a40a-4e54-bbe9-16bcdcd755cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105107872s Aug 25 00:58:17.315: INFO: Pod "downward-api-bb0ea691-a40a-4e54-bbe9-16bcdcd755cf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.362867433s Aug 25 00:58:19.566: INFO: Pod "downward-api-bb0ea691-a40a-4e54-bbe9-16bcdcd755cf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.614225385s Aug 25 00:58:21.590: INFO: Pod "downward-api-bb0ea691-a40a-4e54-bbe9-16bcdcd755cf": Phase="Running", Reason="", readiness=true. Elapsed: 10.638464419s Aug 25 00:58:23.594: INFO: Pod "downward-api-bb0ea691-a40a-4e54-bbe9-16bcdcd755cf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.641823239s STEP: Saw pod success Aug 25 00:58:23.594: INFO: Pod "downward-api-bb0ea691-a40a-4e54-bbe9-16bcdcd755cf" satisfied condition "Succeeded or Failed" Aug 25 00:58:23.596: INFO: Trying to get logs from node latest-worker pod downward-api-bb0ea691-a40a-4e54-bbe9-16bcdcd755cf container dapi-container: STEP: delete the pod Aug 25 00:58:23.684: INFO: Waiting for pod downward-api-bb0ea691-a40a-4e54-bbe9-16bcdcd755cf to disappear Aug 25 00:58:23.703: INFO: Pod downward-api-bb0ea691-a40a-4e54-bbe9-16bcdcd755cf no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:58:23.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9600" for this suite. • [SLOW TEST:13.919 seconds] [sig-node] Downward API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":303,"completed":260,"skipped":4304,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:58:23.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Aug 25 00:58:23.991: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6181 /api/v1/namespaces/watch-6181/configmaps/e2e-watch-test-watch-closed 1b46ac8a-5043-4645-a8ee-c58a76fce101 3440042 0 2020-08-25 00:58:23 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-25 00:58:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 25 00:58:23.991: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6181 
/api/v1/namespaces/watch-6181/configmaps/e2e-watch-test-watch-closed 1b46ac8a-5043-4645-a8ee-c58a76fce101 3440044 0 2020-08-25 00:58:23 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-25 00:58:23 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Aug 25 00:58:24.184: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6181 /api/v1/namespaces/watch-6181/configmaps/e2e-watch-test-watch-closed 1b46ac8a-5043-4645-a8ee-c58a76fce101 3440046 0 2020-08-25 00:58:23 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-25 00:58:24 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 25 00:58:24.185: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6181 /api/v1/namespaces/watch-6181/configmaps/e2e-watch-test-watch-closed 1b46ac8a-5043-4645-a8ee-c58a76fce101 3440047 0 2020-08-25 00:58:23 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-25 00:58:24 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:58:24.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6181" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":303,"completed":261,"skipped":4361,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:58:24.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 25 00:58:24.464: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e0b6b349-4718-4679-bd15-42e617e457a4" in namespace "projected-9994" to be "Succeeded or Failed" Aug 25 00:58:24.526: INFO: Pod "downwardapi-volume-e0b6b349-4718-4679-bd15-42e617e457a4": Phase="Pending", Reason="", readiness=false. Elapsed: 61.532928ms Aug 25 00:58:26.768: INFO: Pod "downwardapi-volume-e0b6b349-4718-4679-bd15-42e617e457a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.303661337s Aug 25 00:58:28.923: INFO: Pod "downwardapi-volume-e0b6b349-4718-4679-bd15-42e617e457a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.459281969s Aug 25 00:58:31.079: INFO: Pod "downwardapi-volume-e0b6b349-4718-4679-bd15-42e617e457a4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.614961553s Aug 25 00:58:33.481: INFO: Pod "downwardapi-volume-e0b6b349-4718-4679-bd15-42e617e457a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.016833103s STEP: Saw pod success Aug 25 00:58:33.481: INFO: Pod "downwardapi-volume-e0b6b349-4718-4679-bd15-42e617e457a4" satisfied condition "Succeeded or Failed" Aug 25 00:58:33.484: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-e0b6b349-4718-4679-bd15-42e617e457a4 container client-container: STEP: delete the pod Aug 25 00:58:34.156: INFO: Waiting for pod downwardapi-volume-e0b6b349-4718-4679-bd15-42e617e457a4 to disappear Aug 25 00:58:34.540: INFO: Pod downwardapi-volume-e0b6b349-4718-4679-bd15-42e617e457a4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:58:34.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9994" for this suite. 
• [SLOW TEST:10.713 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":262,"skipped":4362,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-instrumentation] Events API should delete a collection of events [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:58:34.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events Aug 25 00:58:37.489: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:58:38.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-457" for this suite. 
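The Events API spec above creates a labelled batch of events, lists them by label selector, and removes the whole batch with one DeleteCollection request. As a shape reference only: a core/v1 Event carrying such a label might look like the following, with all names and the label key illustrative (the suite drives the events API group programmatically rather than through manifests):
apiVersion: v1
kind: Event
metadata:
  name: test-event-1              # hypothetical name
  namespace: events-457           # namespace from the run above
  labels:
    testevent-set: "true"         # assumed key; DeleteCollection then selects on this label
involvedObject:
  kind: Pod
  name: example-pod               # hypothetical subject
  namespace: events-457
reason: Testing
message: event created for the delete-collection check
type: Normal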
•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":303,"completed":263,"skipped":4400,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:58:38.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl run pod /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545 [It] should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 25 00:58:39.352: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1410' Aug 25 00:58:39.549: INFO: stderr: "" Aug 25 00:58:39.549: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1550 Aug 25 00:58:39.834: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1410' Aug 25 00:58:42.918: INFO: stderr: "" Aug 25 00:58:42.918: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:58:42.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1410" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":303,"completed":264,"skipped":4403,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:58:42.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-88064914-eabf-486e-8374-8ade8fd7af71 STEP: Creating a pod to test consume secrets Aug 25 00:58:43.062: INFO: Waiting up to 5m0s for pod "pod-secrets-1193c1ea-5c87-4af3-b005-7132fcecaab3" in namespace "secrets-284" to be "Succeeded or Failed" Aug 25 00:58:43.076: INFO: Pod "pod-secrets-1193c1ea-5c87-4af3-b005-7132fcecaab3": Phase="Pending", Reason="", readiness=false. Elapsed: 14.701551ms Aug 25 00:58:45.147: INFO: Pod "pod-secrets-1193c1ea-5c87-4af3-b005-7132fcecaab3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085527692s Aug 25 00:58:47.151: INFO: Pod "pod-secrets-1193c1ea-5c87-4af3-b005-7132fcecaab3": Phase="Running", Reason="", readiness=true. Elapsed: 4.089384466s Aug 25 00:58:49.242: INFO: Pod "pod-secrets-1193c1ea-5c87-4af3-b005-7132fcecaab3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.179808912s STEP: Saw pod success Aug 25 00:58:49.242: INFO: Pod "pod-secrets-1193c1ea-5c87-4af3-b005-7132fcecaab3" satisfied condition "Succeeded or Failed" Aug 25 00:58:49.245: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-1193c1ea-5c87-4af3-b005-7132fcecaab3 container secret-volume-test: STEP: delete the pod Aug 25 00:58:50.036: INFO: Waiting for pod pod-secrets-1193c1ea-5c87-4af3-b005-7132fcecaab3 to disappear Aug 25 00:58:50.058: INFO: Pod pod-secrets-1193c1ea-5c87-4af3-b005-7132fcecaab3 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:58:50.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-284" for this suite. 
• [SLOW TEST:7.241 seconds] [sig-storage] Secrets /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":265,"skipped":4412,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:58:50.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-762d1cce-6bf4-48b5-ba20-49e42fb72f56 STEP: Creating a pod to test consume configMaps Aug 25 00:58:50.334: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5c676d25-1d86-4768-833f-13d374b89c51" in namespace "projected-8885" to be "Succeeded or Failed" Aug 25 00:58:50.441: INFO: Pod "pod-projected-configmaps-5c676d25-1d86-4768-833f-13d374b89c51": Phase="Pending", Reason="", readiness=false. Elapsed: 107.065169ms Aug 25 00:58:52.447: INFO: Pod "pod-projected-configmaps-5c676d25-1d86-4768-833f-13d374b89c51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112380515s Aug 25 00:58:54.452: INFO: Pod "pod-projected-configmaps-5c676d25-1d86-4768-833f-13d374b89c51": Phase="Running", Reason="", readiness=true. Elapsed: 4.118177976s Aug 25 00:58:56.462: INFO: Pod "pod-projected-configmaps-5c676d25-1d86-4768-833f-13d374b89c51": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.128132696s STEP: Saw pod success Aug 25 00:58:56.462: INFO: Pod "pod-projected-configmaps-5c676d25-1d86-4768-833f-13d374b89c51" satisfied condition "Succeeded or Failed" Aug 25 00:58:56.466: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-5c676d25-1d86-4768-833f-13d374b89c51 container projected-configmap-volume-test: STEP: delete the pod Aug 25 00:58:56.491: INFO: Waiting for pod pod-projected-configmaps-5c676d25-1d86-4768-833f-13d374b89c51 to disappear Aug 25 00:58:56.494: INFO: Pod pod-projected-configmaps-5c676d25-1d86-4768-833f-13d374b89c51 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:58:56.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8885" for this suite. • [SLOW TEST:6.335 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":266,"skipped":4425,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:58:56.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 25 00:58:56.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Aug 25 00:58:57.244: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-25T00:58:57Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-25T00:58:57Z]] name:name1 resourceVersion:3440260 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f8c33918-e6c6-4480-ab0c-7bb2677ac3d0] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Aug 25 00:59:07.251: INFO: Got : ADDED 
&{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-25T00:59:07Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-25T00:59:07Z]] name:name2 resourceVersion:3440299 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:5b729103-fab8-46d8-8800-7cac5d64641c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Aug 25 00:59:17.320: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-25T00:58:57Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-25T00:59:17Z]] name:name1 resourceVersion:3440326 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f8c33918-e6c6-4480-ab0c-7bb2677ac3d0] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Aug 25 00:59:27.446: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-25T00:59:07Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-25T00:59:27Z]] name:name2 resourceVersion:3440353 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:5b729103-fab8-46d8-8800-7cac5d64641c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Aug 25 00:59:37.491: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-25T00:58:57Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-25T00:59:17Z]] name:name1 resourceVersion:3440381 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:f8c33918-e6c6-4480-ab0c-7bb2677ac3d0] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Aug 25 00:59:47.567: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-25T00:59:07Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-25T00:59:27Z]] name:name2 resourceVersion:3440409 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:5b729103-fab8-46d8-8800-7cac5d64641c] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 00:59:58.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "crd-watch-7588" for this suite. • [SLOW TEST:61.582 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":303,"completed":267,"skipped":4430,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 00:59:58.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 25 01:01:58.227: INFO: Deleting pod "var-expansion-2396e471-8cff-4544-8e73-d50fed3667db" in namespace "var-expansion-4695" Aug 25 01:01:58.232: INFO: Wait up to 5m0s for pod "var-expansion-2396e471-8cff-4544-8e73-d50fed3667db" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 01:02:02.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4695" for this suite. 
• [SLOW TEST:124.168 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":303,"completed":268,"skipped":4444,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 01:02:02.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-e96dd027-89bf-4c0d-8dad-9e390cd8896f STEP: Creating a pod to test consume configMaps Aug 25 01:02:02.394: INFO: Waiting up to 5m0s for pod "pod-configmaps-1b572b31-3824-4fc1-9c51-19d0ba014484" in namespace "configmap-8613" to be "Succeeded or Failed" Aug 25 01:02:02.418: INFO: Pod "pod-configmaps-1b572b31-3824-4fc1-9c51-19d0ba014484": Phase="Pending", Reason="", readiness=false. Elapsed: 24.598792ms Aug 25 01:02:04.634: INFO: Pod "pod-configmaps-1b572b31-3824-4fc1-9c51-19d0ba014484": Phase="Pending", Reason="", readiness=false. Elapsed: 2.239758725s Aug 25 01:02:06.637: INFO: Pod "pod-configmaps-1b572b31-3824-4fc1-9c51-19d0ba014484": Phase="Pending", Reason="", readiness=false. Elapsed: 4.243557403s Aug 25 01:02:08.649: INFO: Pod "pod-configmaps-1b572b31-3824-4fc1-9c51-19d0ba014484": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.255276073s STEP: Saw pod success Aug 25 01:02:08.649: INFO: Pod "pod-configmaps-1b572b31-3824-4fc1-9c51-19d0ba014484" satisfied condition "Succeeded or Failed" Aug 25 01:02:08.651: INFO: Trying to get logs from node latest-worker pod pod-configmaps-1b572b31-3824-4fc1-9c51-19d0ba014484 container configmap-volume-test: STEP: delete the pod Aug 25 01:02:08.713: INFO: Waiting for pod pod-configmaps-1b572b31-3824-4fc1-9c51-19d0ba014484 to disappear Aug 25 01:02:08.726: INFO: Pod pod-configmaps-1b572b31-3824-4fc1-9c51-19d0ba014484 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 01:02:08.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8613" for this suite. • [SLOW TEST:6.478 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":269,"skipped":4453,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 01:02:08.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 25 01:02:08.920: INFO: The status of Pod test-webserver-671bcbb1-1174-4c53-9d8e-e03ce1676771 is Pending, waiting for it to be Running (with Ready = true) Aug 25 01:02:11.058: INFO: The status of Pod test-webserver-671bcbb1-1174-4c53-9d8e-e03ce1676771 is Pending, waiting for it to be Running (with Ready = true) Aug 25 01:02:12.925: INFO: The status of Pod test-webserver-671bcbb1-1174-4c53-9d8e-e03ce1676771 is Running (Ready = false) Aug 25 01:02:14.925: INFO: The status of Pod test-webserver-671bcbb1-1174-4c53-9d8e-e03ce1676771 is Running (Ready = false) Aug 25 01:02:16.925: INFO: The status of Pod test-webserver-671bcbb1-1174-4c53-9d8e-e03ce1676771 is Running (Ready = false) 
Aug 25 01:02:18.924: INFO: The status of Pod test-webserver-671bcbb1-1174-4c53-9d8e-e03ce1676771 is Running (Ready = false) Aug 25 01:02:21.185: INFO: The status of Pod test-webserver-671bcbb1-1174-4c53-9d8e-e03ce1676771 is Running (Ready = false) Aug 25 01:02:22.956: INFO: The status of Pod test-webserver-671bcbb1-1174-4c53-9d8e-e03ce1676771 is Running (Ready = false) Aug 25 01:02:24.925: INFO: The status of Pod test-webserver-671bcbb1-1174-4c53-9d8e-e03ce1676771 is Running (Ready = false) Aug 25 01:02:26.999: INFO: The status of Pod test-webserver-671bcbb1-1174-4c53-9d8e-e03ce1676771 is Running (Ready = false) Aug 25 01:02:28.924: INFO: The status of Pod test-webserver-671bcbb1-1174-4c53-9d8e-e03ce1676771 is Running (Ready = false) Aug 25 01:02:31.041: INFO: The status of Pod test-webserver-671bcbb1-1174-4c53-9d8e-e03ce1676771 is Running (Ready = false) Aug 25 01:02:32.924: INFO: The status of Pod test-webserver-671bcbb1-1174-4c53-9d8e-e03ce1676771 is Running (Ready = false) Aug 25 01:02:34.926: INFO: The status of Pod test-webserver-671bcbb1-1174-4c53-9d8e-e03ce1676771 is Running (Ready = false) Aug 25 01:02:36.924: INFO: The status of Pod test-webserver-671bcbb1-1174-4c53-9d8e-e03ce1676771 is Running (Ready = true) Aug 25 01:02:36.927: INFO: Container started at 2020-08-25 01:02:12 +0000 UTC, pod became ready at 2020-08-25 01:02:36 +0000 UTC [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 01:02:36.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5672" for this suite. • [SLOW TEST:28.205 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":303,"completed":270,"skipped":4459,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 01:02:36.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition Aug 25 01:02:37.021: INFO: 
Waiting up to 5m0s for pod "var-expansion-a8118d83-1c72-4a08-b259-8f2a32d67d30" in namespace "var-expansion-250" to be "Succeeded or Failed" Aug 25 01:02:37.024: INFO: Pod "var-expansion-a8118d83-1c72-4a08-b259-8f2a32d67d30": Phase="Pending", Reason="", readiness=false. Elapsed: 3.20522ms Aug 25 01:02:39.029: INFO: Pod "var-expansion-a8118d83-1c72-4a08-b259-8f2a32d67d30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008012561s Aug 25 01:02:41.131: INFO: Pod "var-expansion-a8118d83-1c72-4a08-b259-8f2a32d67d30": Phase="Running", Reason="", readiness=true. Elapsed: 4.109971723s Aug 25 01:02:43.135: INFO: Pod "var-expansion-a8118d83-1c72-4a08-b259-8f2a32d67d30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.114487818s STEP: Saw pod success Aug 25 01:02:43.135: INFO: Pod "var-expansion-a8118d83-1c72-4a08-b259-8f2a32d67d30" satisfied condition "Succeeded or Failed" Aug 25 01:02:43.138: INFO: Trying to get logs from node latest-worker2 pod var-expansion-a8118d83-1c72-4a08-b259-8f2a32d67d30 container dapi-container: STEP: delete the pod Aug 25 01:02:43.216: INFO: Waiting for pod var-expansion-a8118d83-1c72-4a08-b259-8f2a32d67d30 to disappear Aug 25 01:02:43.271: INFO: Pod var-expansion-a8118d83-1c72-4a08-b259-8f2a32d67d30 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 01:02:43.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-250" for this suite. • [SLOW TEST:6.342 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":303,"completed":271,"skipped":4462,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 01:02:43.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: 
Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 01:03:04.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-415" for this suite. STEP: Destroying namespace "nsdeletetest-5615" for this suite. Aug 25 01:03:05.041: INFO: Namespace nsdeletetest-5615 was already deleted STEP: Destroying namespace "nsdeletetest-7265" for this suite. • [SLOW TEST:21.765 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":303,"completed":272,"skipped":4474,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 01:03:05.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-ab2160e0-45c6-4ae7-8be5-1bc66ca83e7c STEP: Creating a pod to test consume configMaps Aug 25 01:03:06.383: INFO: Waiting up to 5m0s for pod "pod-configmaps-84744776-1d38-414e-9014-890d3303900f" in namespace "configmap-3482" to be "Succeeded or Failed" Aug 25 01:03:06.868: INFO: Pod "pod-configmaps-84744776-1d38-414e-9014-890d3303900f": Phase="Pending", Reason="", readiness=false. Elapsed: 485.669334ms Aug 25 01:03:08.872: INFO: Pod "pod-configmaps-84744776-1d38-414e-9014-890d3303900f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.489419966s Aug 25 01:03:10.958: INFO: Pod "pod-configmaps-84744776-1d38-414e-9014-890d3303900f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.575725335s Aug 25 01:03:13.032: INFO: Pod "pod-configmaps-84744776-1d38-414e-9014-890d3303900f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.64919592s Aug 25 01:03:15.109: INFO: Pod "pod-configmaps-84744776-1d38-414e-9014-890d3303900f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.726082051s Aug 25 01:03:17.715: INFO: Pod "pod-configmaps-84744776-1d38-414e-9014-890d3303900f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.332103696s STEP: Saw pod success Aug 25 01:03:17.715: INFO: Pod "pod-configmaps-84744776-1d38-414e-9014-890d3303900f" satisfied condition "Succeeded or Failed" Aug 25 01:03:17.719: INFO: Trying to get logs from node latest-worker pod pod-configmaps-84744776-1d38-414e-9014-890d3303900f container configmap-volume-test: STEP: delete the pod Aug 25 01:03:18.563: INFO: Waiting for pod pod-configmaps-84744776-1d38-414e-9014-890d3303900f to disappear Aug 25 01:03:19.114: INFO: Pod pod-configmaps-84744776-1d38-414e-9014-890d3303900f no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 01:03:19.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3482" for this suite. • [SLOW TEST:14.309 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":273,"skipped":4509,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 01:03:19.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 01:03:31.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5337" for this suite. 
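The kubelet test above launches a busybox container whose root filesystem is mounted read-only and checks that a write to / fails. The essential knob is the container-level securityContext; a minimal sketch with hypothetical names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: busybox-readonly-fs
  spec:
    restartPolicy: Never
    containers:
    - name: busybox
      image: busybox:1.29
      # the write is expected to fail with 'Read-only file system'
      command: ["sh", "-c", "echo test > /file; sleep 240"]
      securityContext:
        readOnlyRootFilesystem: true
  EOF
  kubectl logs busybox-readonly-fs   # expect: sh: can't create /file: Read-only file system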
• [SLOW TEST:12.377 seconds] [k8s.io] Kubelet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a read only busybox container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:188 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":274,"skipped":4511,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 01:03:31.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 01:03:48.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6856" for this suite. • [SLOW TEST:16.360 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":303,"completed":275,"skipped":4543,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 01:03:48.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 01:03:49.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6797" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":303,"completed":276,"skipped":4544,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 01:03:49.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 25 01:03:49.524: INFO: Waiting up to 5m0s for pod "pod-eefd72d7-aab4-47a8-bf14-1a41be1e4c97" in namespace "emptydir-6372" to be "Succeeded or Failed" Aug 25 01:03:49.602: INFO: Pod "pod-eefd72d7-aab4-47a8-bf14-1a41be1e4c97": Phase="Pending", Reason="", readiness=false. 
Elapsed: 77.404663ms Aug 25 01:03:51.653: INFO: Pod "pod-eefd72d7-aab4-47a8-bf14-1a41be1e4c97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129263085s Aug 25 01:03:53.934: INFO: Pod "pod-eefd72d7-aab4-47a8-bf14-1a41be1e4c97": Phase="Pending", Reason="", readiness=false. Elapsed: 4.410205064s Aug 25 01:03:56.127: INFO: Pod "pod-eefd72d7-aab4-47a8-bf14-1a41be1e4c97": Phase="Pending", Reason="", readiness=false. Elapsed: 6.602906315s Aug 25 01:03:58.355: INFO: Pod "pod-eefd72d7-aab4-47a8-bf14-1a41be1e4c97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.830437857s STEP: Saw pod success Aug 25 01:03:58.355: INFO: Pod "pod-eefd72d7-aab4-47a8-bf14-1a41be1e4c97" satisfied condition "Succeeded or Failed" Aug 25 01:03:58.357: INFO: Trying to get logs from node latest-worker pod pod-eefd72d7-aab4-47a8-bf14-1a41be1e4c97 container test-container: STEP: delete the pod Aug 25 01:03:58.577: INFO: Waiting for pod pod-eefd72d7-aab4-47a8-bf14-1a41be1e4c97 to disappear Aug 25 01:03:58.652: INFO: Pod pod-eefd72d7-aab4-47a8-bf14-1a41be1e4c97 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 01:03:58.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6372" for this suite. • [SLOW TEST:9.743 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":277,"skipped":4565,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 01:03:59.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl label /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1333 STEP: creating the pod Aug 25 01:03:59.350: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7507' Aug 25 01:04:06.205: INFO: stderr: "" Aug 25 01:04:06.205: INFO: stdout: "pod/pause created\n" Aug 25 01:04:06.205: INFO: Waiting up to 
5m0s for 1 pods to be running and ready: [pause] Aug 25 01:04:06.206: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7507" to be "running and ready" Aug 25 01:04:06.245: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 39.939131ms Aug 25 01:04:08.250: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044333918s Aug 25 01:04:10.254: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048460057s Aug 25 01:04:12.258: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.052632077s Aug 25 01:04:12.258: INFO: Pod "pause" satisfied condition "running and ready" Aug 25 01:04:12.258: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod Aug 25 01:04:12.258: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-7507' Aug 25 01:04:12.381: INFO: stderr: "" Aug 25 01:04:12.381: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Aug 25 01:04:12.381: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7507' Aug 25 01:04:12.477: INFO: stderr: "" Aug 25 01:04:12.477: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 6s testing-label-value\n" STEP: removing the label testing-label of a pod Aug 25 01:04:12.477: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-7507' Aug 25 01:04:12.586: INFO: stderr: "" Aug 25 01:04:12.586: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Aug 25 01:04:12.586: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7507' Aug 25 01:04:12.694: INFO: stderr: "" Aug 25 01:04:12.694: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 6s \n" [AfterEach] Kubectl label /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1340 STEP: using delete to clean up resources Aug 25 01:04:12.694: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7507' Aug 25 01:04:12.825: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Aug 25 01:04:12.825: INFO: stdout: "pod \"pause\" force deleted\n" Aug 25 01:04:12.825: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-7507' Aug 25 01:04:12.930: INFO: stderr: "No resources found in kubectl-7507 namespace.\n" Aug 25 01:04:12.930: INFO: stdout: "" Aug 25 01:04:12.930: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-7507 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 25 01:04:13.028: INFO: stderr: "" Aug 25 01:04:13.028: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 01:04:13.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7507" for this suite. • [SLOW TEST:13.894 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1330 should update the label on a resource [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":303,"completed":278,"skipped":4574,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 01:04:13.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-bfbf75b6-ae09-40bd-a09c-3f80d84672eb in namespace container-probe-1519 Aug 25 01:04:18.337: INFO: Started pod liveness-bfbf75b6-ae09-40bd-a09c-3f80d84672eb in namespace container-probe-1519 STEP: checking the pod's current state and verifying that restartCount is present Aug 25 01:04:18.343: INFO: Initial restart count of pod 
liveness-bfbf75b6-ae09-40bd-a09c-3f80d84672eb is 0 Aug 25 01:04:49.244: INFO: Restart count of pod container-probe-1519/liveness-bfbf75b6-ae09-40bd-a09c-3f80d84672eb is now 1 (30.901166876s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 01:04:49.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1519" for this suite. • [SLOW TEST:36.961 seconds] [k8s.io] Probing container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":279,"skipped":4575,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 01:04:49.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 25 01:04:50.702: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 01:04:53.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9149" for this suite. 
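Custom resource defaulting, exercised above, relies on a structural openAPIV3Schema: any field carrying a default is filled in both when a request is admitted and when an object is read back from storage. A self-contained sketch with a hypothetical group and kind:

  kubectl apply -f - <<'EOF'
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: demos.mygroup.example.com
  spec:
    group: mygroup.example.com
    scope: Namespaced
    names:
      plural: demos
      singular: demo
      kind: Demo
    versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer
                  default: 1          # applied on create/update and on read from etcd
  EOF
  # creating a Demo without spec.replicas should read back with replicas: 1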
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":303,"completed":280,"skipped":4604,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSS ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 01:04:54.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod Aug 25 01:06:57.079: INFO: Successfully updated pod "var-expansion-2a60da69-b78e-4897-93a8-8ac62b0e54dd" STEP: waiting for pod running STEP: deleting the pod gracefully Aug 25 01:07:01.183: INFO: Deleting pod "var-expansion-2a60da69-b78e-4897-93a8-8ac62b0e54dd" in namespace "var-expansion-8806" Aug 25 01:07:01.219: INFO: Wait up to 5m0s for pod "var-expansion-2a60da69-b78e-4897-93a8-8ac62b0e54dd" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 01:07:41.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8806" for this suite. 
• [SLOW TEST:167.257 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":303,"completed":281,"skipped":4611,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 01:07:41.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2183 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Aug 25 01:07:43.394: INFO: Found 0 stateful pods, waiting for 3 Aug 25 01:07:53.398: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 25 01:07:53.398: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 25 01:07:53.398: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Aug 25 01:08:03.879: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 25 01:08:03.879: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 25 01:08:03.879: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Aug 25 01:08:04.257: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2183 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 25 01:08:05.080: INFO: stderr: "I0825 01:08:04.573120 3644 log.go:181] (0xc000146370) (0xc00088c460) Create stream\nI0825 01:08:04.573177 3644 
log.go:181] (0xc000146370) (0xc00088c460) Stream added, broadcasting: 1\nI0825 01:08:04.575042 3644 log.go:181] (0xc000146370) Reply frame received for 1\nI0825 01:08:04.575087 3644 log.go:181] (0xc000146370) (0xc000c45540) Create stream\nI0825 01:08:04.575108 3644 log.go:181] (0xc000146370) (0xc000c45540) Stream added, broadcasting: 3\nI0825 01:08:04.575825 3644 log.go:181] (0xc000146370) Reply frame received for 3\nI0825 01:08:04.575869 3644 log.go:181] (0xc000146370) (0xc000150aa0) Create stream\nI0825 01:08:04.575890 3644 log.go:181] (0xc000146370) (0xc000150aa0) Stream added, broadcasting: 5\nI0825 01:08:04.576592 3644 log.go:181] (0xc000146370) Reply frame received for 5\nI0825 01:08:04.633270 3644 log.go:181] (0xc000146370) Data frame received for 5\nI0825 01:08:04.633290 3644 log.go:181] (0xc000150aa0) (5) Data frame handling\nI0825 01:08:04.633303 3644 log.go:181] (0xc000150aa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0825 01:08:05.069446 3644 log.go:181] (0xc000146370) Data frame received for 5\nI0825 01:08:05.069473 3644 log.go:181] (0xc000150aa0) (5) Data frame handling\nI0825 01:08:05.069529 3644 log.go:181] (0xc000146370) Data frame received for 3\nI0825 01:08:05.069554 3644 log.go:181] (0xc000c45540) (3) Data frame handling\nI0825 01:08:05.069569 3644 log.go:181] (0xc000c45540) (3) Data frame sent\nI0825 01:08:05.069577 3644 log.go:181] (0xc000146370) Data frame received for 3\nI0825 01:08:05.069583 3644 log.go:181] (0xc000c45540) (3) Data frame handling\nI0825 01:08:05.070618 3644 log.go:181] (0xc000146370) Data frame received for 1\nI0825 01:08:05.070629 3644 log.go:181] (0xc00088c460) (1) Data frame handling\nI0825 01:08:05.070642 3644 log.go:181] (0xc00088c460) (1) Data frame sent\nI0825 01:08:05.070650 3644 log.go:181] (0xc000146370) (0xc00088c460) Stream removed, broadcasting: 1\nI0825 01:08:05.070660 3644 log.go:181] (0xc000146370) Go away received\nI0825 01:08:05.071084 3644 log.go:181] (0xc000146370) (0xc00088c460) Stream removed, broadcasting: 1\nI0825 01:08:05.071102 3644 log.go:181] (0xc000146370) (0xc000c45540) Stream removed, broadcasting: 3\nI0825 01:08:05.071112 3644 log.go:181] (0xc000146370) (0xc000150aa0) Stream removed, broadcasting: 5\n" Aug 25 01:08:05.080: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 25 01:08:05.080: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Aug 25 01:08:06.388: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Aug 25 01:08:17.239: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2183 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 25 01:08:17.644: INFO: stderr: "I0825 01:08:17.466035 3662 log.go:181] (0xc000210e70) (0xc000c25220) Create stream\nI0825 01:08:17.466115 3662 log.go:181] (0xc000210e70) (0xc000c25220) Stream added, broadcasting: 1\nI0825 01:08:17.468044 3662 log.go:181] (0xc000210e70) Reply frame received for 1\nI0825 01:08:17.468097 3662 log.go:181] (0xc000210e70) (0xc0004d43c0) Create stream\nI0825 01:08:17.468117 3662 log.go:181] (0xc000210e70) (0xc0004d43c0) Stream added, broadcasting: 3\nI0825 01:08:17.468850 3662 log.go:181] (0xc000210e70) 
Reply frame received for 3\nI0825 01:08:17.468877 3662 log.go:181] (0xc000210e70) (0xc000c252c0) Create stream\nI0825 01:08:17.468883 3662 log.go:181] (0xc000210e70) (0xc000c252c0) Stream added, broadcasting: 5\nI0825 01:08:17.469606 3662 log.go:181] (0xc000210e70) Reply frame received for 5\nI0825 01:08:17.523273 3662 log.go:181] (0xc000210e70) Data frame received for 5\nI0825 01:08:17.523320 3662 log.go:181] (0xc000c252c0) (5) Data frame handling\nI0825 01:08:17.523355 3662 log.go:181] (0xc000c252c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0825 01:08:17.630807 3662 log.go:181] (0xc000210e70) Data frame received for 3\nI0825 01:08:17.630837 3662 log.go:181] (0xc0004d43c0) (3) Data frame handling\nI0825 01:08:17.630850 3662 log.go:181] (0xc0004d43c0) (3) Data frame sent\nI0825 01:08:17.631088 3662 log.go:181] (0xc000210e70) Data frame received for 5\nI0825 01:08:17.631102 3662 log.go:181] (0xc000c252c0) (5) Data frame handling\nI0825 01:08:17.631117 3662 log.go:181] (0xc000210e70) Data frame received for 3\nI0825 01:08:17.631123 3662 log.go:181] (0xc0004d43c0) (3) Data frame handling\nI0825 01:08:17.632356 3662 log.go:181] (0xc000210e70) Data frame received for 1\nI0825 01:08:17.632374 3662 log.go:181] (0xc000c25220) (1) Data frame handling\nI0825 01:08:17.632385 3662 log.go:181] (0xc000c25220) (1) Data frame sent\nI0825 01:08:17.632398 3662 log.go:181] (0xc000210e70) (0xc000c25220) Stream removed, broadcasting: 1\nI0825 01:08:17.632412 3662 log.go:181] (0xc000210e70) Go away received\nI0825 01:08:17.632808 3662 log.go:181] (0xc000210e70) (0xc000c25220) Stream removed, broadcasting: 1\nI0825 01:08:17.632835 3662 log.go:181] (0xc000210e70) (0xc0004d43c0) Stream removed, broadcasting: 3\nI0825 01:08:17.632844 3662 log.go:181] (0xc000210e70) (0xc000c252c0) Stream removed, broadcasting: 5\n" Aug 25 01:08:17.645: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 25 01:08:17.645: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 25 01:08:27.873: INFO: Waiting for StatefulSet statefulset-2183/ss2 to complete update Aug 25 01:08:27.873: INFO: Waiting for Pod statefulset-2183/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 25 01:08:27.873: INFO: Waiting for Pod statefulset-2183/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 25 01:08:27.873: INFO: Waiting for Pod statefulset-2183/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 25 01:08:38.474: INFO: Waiting for StatefulSet statefulset-2183/ss2 to complete update Aug 25 01:08:38.474: INFO: Waiting for Pod statefulset-2183/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 25 01:08:38.474: INFO: Waiting for Pod statefulset-2183/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 25 01:08:47.879: INFO: Waiting for StatefulSet statefulset-2183/ss2 to complete update Aug 25 01:08:47.879: INFO: Waiting for Pod statefulset-2183/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 25 01:08:47.879: INFO: Waiting for Pod statefulset-2183/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 25 01:08:57.881: INFO: Waiting for StatefulSet statefulset-2183/ss2 to complete update Aug 25 01:08:57.881: INFO: Waiting for Pod statefulset-2183/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 25 01:09:07.879: INFO: Waiting for 
StatefulSet statefulset-2183/ss2 to complete update Aug 25 01:09:07.879: INFO: Waiting for Pod statefulset-2183/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 25 01:09:17.879: INFO: Waiting for StatefulSet statefulset-2183/ss2 to complete update STEP: Rolling back to a previous revision Aug 25 01:09:27.881: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2183 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 25 01:09:28.450: INFO: stderr: "I0825 01:09:28.004319 3680 log.go:181] (0xc0004f74a0) (0xc000468960) Create stream\nI0825 01:09:28.004370 3680 log.go:181] (0xc0004f74a0) (0xc000468960) Stream added, broadcasting: 1\nI0825 01:09:28.008342 3680 log.go:181] (0xc0004f74a0) Reply frame received for 1\nI0825 01:09:28.008371 3680 log.go:181] (0xc0004f74a0) (0xc0007ba3c0) Create stream\nI0825 01:09:28.008391 3680 log.go:181] (0xc0004f74a0) (0xc0007ba3c0) Stream added, broadcasting: 3\nI0825 01:09:28.009486 3680 log.go:181] (0xc0004f74a0) Reply frame received for 3\nI0825 01:09:28.009506 3680 log.go:181] (0xc0004f74a0) (0xc000c84000) Create stream\nI0825 01:09:28.009512 3680 log.go:181] (0xc0004f74a0) (0xc000c84000) Stream added, broadcasting: 5\nI0825 01:09:28.011105 3680 log.go:181] (0xc0004f74a0) Reply frame received for 5\nI0825 01:09:28.090315 3680 log.go:181] (0xc0004f74a0) Data frame received for 5\nI0825 01:09:28.090352 3680 log.go:181] (0xc000c84000) (5) Data frame handling\nI0825 01:09:28.090375 3680 log.go:181] (0xc000c84000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0825 01:09:28.437137 3680 log.go:181] (0xc0004f74a0) Data frame received for 5\nI0825 01:09:28.437178 3680 log.go:181] (0xc000c84000) (5) Data frame handling\nI0825 01:09:28.437209 3680 log.go:181] (0xc0004f74a0) Data frame received for 3\nI0825 01:09:28.437232 3680 log.go:181] (0xc0007ba3c0) (3) Data frame handling\nI0825 01:09:28.437265 3680 log.go:181] (0xc0007ba3c0) (3) Data frame sent\nI0825 01:09:28.437274 3680 log.go:181] (0xc0004f74a0) Data frame received for 3\nI0825 01:09:28.437280 3680 log.go:181] (0xc0007ba3c0) (3) Data frame handling\nI0825 01:09:28.439160 3680 log.go:181] (0xc0004f74a0) Data frame received for 1\nI0825 01:09:28.439187 3680 log.go:181] (0xc000468960) (1) Data frame handling\nI0825 01:09:28.439202 3680 log.go:181] (0xc000468960) (1) Data frame sent\nI0825 01:09:28.439220 3680 log.go:181] (0xc0004f74a0) (0xc000468960) Stream removed, broadcasting: 1\nI0825 01:09:28.439236 3680 log.go:181] (0xc0004f74a0) Go away received\nI0825 01:09:28.439577 3680 log.go:181] (0xc0004f74a0) (0xc000468960) Stream removed, broadcasting: 1\nI0825 01:09:28.439590 3680 log.go:181] (0xc0004f74a0) (0xc0007ba3c0) Stream removed, broadcasting: 3\nI0825 01:09:28.439595 3680 log.go:181] (0xc0004f74a0) (0xc000c84000) Stream removed, broadcasting: 5\n" Aug 25 01:09:28.450: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 25 01:09:28.450: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 25 01:09:38.654: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Aug 25 01:09:49.392: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2183 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || 
true' Aug 25 01:09:50.257: INFO: stderr: "I0825 01:09:50.165654 3698 log.go:181] (0xc0001440b0) (0xc00013c280) Create stream\nI0825 01:09:50.165713 3698 log.go:181] (0xc0001440b0) (0xc00013c280) Stream added, broadcasting: 1\nI0825 01:09:50.167680 3698 log.go:181] (0xc0001440b0) Reply frame received for 1\nI0825 01:09:50.167729 3698 log.go:181] (0xc0001440b0) (0xc00013c320) Create stream\nI0825 01:09:50.167740 3698 log.go:181] (0xc0001440b0) (0xc00013c320) Stream added, broadcasting: 3\nI0825 01:09:50.168701 3698 log.go:181] (0xc0001440b0) Reply frame received for 3\nI0825 01:09:50.168813 3698 log.go:181] (0xc0001440b0) (0xc00013c3c0) Create stream\nI0825 01:09:50.168828 3698 log.go:181] (0xc0001440b0) (0xc00013c3c0) Stream added, broadcasting: 5\nI0825 01:09:50.169576 3698 log.go:181] (0xc0001440b0) Reply frame received for 5\nI0825 01:09:50.227551 3698 log.go:181] (0xc0001440b0) Data frame received for 5\nI0825 01:09:50.227588 3698 log.go:181] (0xc00013c3c0) (5) Data frame handling\nI0825 01:09:50.227611 3698 log.go:181] (0xc00013c3c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0825 01:09:50.243221 3698 log.go:181] (0xc0001440b0) Data frame received for 3\nI0825 01:09:50.243252 3698 log.go:181] (0xc00013c320) (3) Data frame handling\nI0825 01:09:50.243270 3698 log.go:181] (0xc00013c320) (3) Data frame sent\nI0825 01:09:50.243275 3698 log.go:181] (0xc0001440b0) Data frame received for 3\nI0825 01:09:50.243279 3698 log.go:181] (0xc00013c320) (3) Data frame handling\nI0825 01:09:50.243332 3698 log.go:181] (0xc0001440b0) Data frame received for 5\nI0825 01:09:50.243361 3698 log.go:181] (0xc00013c3c0) (5) Data frame handling\nI0825 01:09:50.245724 3698 log.go:181] (0xc0001440b0) Data frame received for 1\nI0825 01:09:50.246064 3698 log.go:181] (0xc00013c280) (1) Data frame handling\nI0825 01:09:50.246101 3698 log.go:181] (0xc00013c280) (1) Data frame sent\nI0825 01:09:50.246138 3698 log.go:181] (0xc0001440b0) (0xc00013c280) Stream removed, broadcasting: 1\nI0825 01:09:50.246164 3698 log.go:181] (0xc0001440b0) Go away received\nI0825 01:09:50.246814 3698 log.go:181] (0xc0001440b0) (0xc00013c280) Stream removed, broadcasting: 1\nI0825 01:09:50.246846 3698 log.go:181] (0xc0001440b0) (0xc00013c320) Stream removed, broadcasting: 3\nI0825 01:09:50.246859 3698 log.go:181] (0xc0001440b0) (0xc00013c3c0) Stream removed, broadcasting: 5\n" Aug 25 01:09:50.257: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 25 01:09:50.257: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 25 01:10:00.277: INFO: Waiting for StatefulSet statefulset-2183/ss2 to complete update Aug 25 01:10:00.277: INFO: Waiting for Pod statefulset-2183/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Aug 25 01:10:00.277: INFO: Waiting for Pod statefulset-2183/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Aug 25 01:10:10.412: INFO: Waiting for StatefulSet statefulset-2183/ss2 to complete update Aug 25 01:10:10.412: INFO: Waiting for Pod statefulset-2183/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Aug 25 01:10:10.412: INFO: Waiting for Pod statefulset-2183/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Aug 25 01:10:20.294: INFO: Waiting for StatefulSet statefulset-2183/ss2 to complete update Aug 25 01:10:20.294: INFO: Waiting for Pod statefulset-2183/ss2-0 to have revision ss2-65c7964b94 
update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 25 01:10:40.285: INFO: Deleting all statefulset in ns statefulset-2183 Aug 25 01:10:40.288: INFO: Scaling statefulset ss2 to 0 Aug 25 01:11:00.312: INFO: Waiting for statefulset status.replicas updated to 0 Aug 25 01:11:00.316: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 01:11:00.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2183" for this suite. • [SLOW TEST:198.747 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":303,"completed":282,"skipped":4669,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 01:11:00.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4703.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-4703.svc.cluster.local;check="$$(dig +tcp +noall +answer +search 
dns-test-service-2.dns-4703.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-4703.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4703.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4703.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-4703.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4703.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-4703.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4703.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 25 01:11:10.526: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:10.531: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:10.537: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:10.543: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:10.628: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:10.694: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods 
dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:10.717: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:10.738: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:10.917: INFO: Lookups using dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4703.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4703.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local jessie_udp@dns-test-service-2.dns-4703.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4703.svc.cluster.local] Aug 25 01:11:15.923: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:15.927: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:15.931: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:15.934: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:15.943: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:15.946: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:15.949: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:15.952: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:15.957: INFO: Lookups using dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4 failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4703.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4703.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local jessie_udp@dns-test-service-2.dns-4703.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4703.svc.cluster.local] Aug 25 01:11:20.921: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:20.923: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:20.925: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:20.927: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:20.934: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:20.936: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:20.938: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:20.941: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:20.945: INFO: Lookups using dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4703.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4703.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local jessie_udp@dns-test-service-2.dns-4703.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4703.svc.cluster.local] Aug 25 01:11:25.926: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not 
find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:25.930: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:25.934: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:25.937: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:25.945: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:25.947: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:25.950: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:25.953: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:25.960: INFO: Lookups using dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4703.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4703.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local jessie_udp@dns-test-service-2.dns-4703.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4703.svc.cluster.local] Aug 25 01:11:30.968: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:30.971: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:30.974: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:31.017: INFO: Unable to read 
wheezy_tcp@dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:31.261: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:31.269: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:31.272: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:31.275: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:31.279: INFO: Lookups using dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4703.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4703.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local jessie_udp@dns-test-service-2.dns-4703.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4703.svc.cluster.local] Aug 25 01:11:35.922: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:35.926: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:35.929: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:35.931: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:35.939: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:35.942: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find 
the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:35.944: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:35.947: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4703.svc.cluster.local from pod dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4: the server could not find the requested resource (get pods dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4) Aug 25 01:11:35.953: INFO: Lookups using dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4703.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4703.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4703.svc.cluster.local jessie_udp@dns-test-service-2.dns-4703.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4703.svc.cluster.local] Aug 25 01:11:40.953: INFO: DNS probes using dns-4703/dns-test-b2ff136e-7521-421a-a9a6-ccea8a8453a4 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 01:11:41.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4703" for this suite. • [SLOW TEST:41.268 seconds] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":303,"completed":283,"skipped":4670,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 01:11:41.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 01:11:42.197: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-270" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":303,"completed":284,"skipped":4681,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 01:11:42.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-636c8aa9-675b-47e4-83aa-165a90fc971a STEP: Creating a pod to test consume configMaps Aug 25 01:11:43.148: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cbf229ae-7f3c-4777-b8e1-77530f838a25" in namespace "projected-5158" to be "Succeeded or Failed" Aug 25 01:11:43.573: INFO: Pod "pod-projected-configmaps-cbf229ae-7f3c-4777-b8e1-77530f838a25": Phase="Pending", Reason="", readiness=false. Elapsed: 424.820734ms Aug 25 01:11:45.846: INFO: Pod "pod-projected-configmaps-cbf229ae-7f3c-4777-b8e1-77530f838a25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.697753979s Aug 25 01:11:48.111: INFO: Pod "pod-projected-configmaps-cbf229ae-7f3c-4777-b8e1-77530f838a25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.963321992s Aug 25 01:11:50.125: INFO: Pod "pod-projected-configmaps-cbf229ae-7f3c-4777-b8e1-77530f838a25": Phase="Pending", Reason="", readiness=false. Elapsed: 6.976923106s Aug 25 01:11:52.146: INFO: Pod "pod-projected-configmaps-cbf229ae-7f3c-4777-b8e1-77530f838a25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.997373702s STEP: Saw pod success Aug 25 01:11:52.146: INFO: Pod "pod-projected-configmaps-cbf229ae-7f3c-4777-b8e1-77530f838a25" satisfied condition "Succeeded or Failed" Aug 25 01:11:52.148: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-cbf229ae-7f3c-4777-b8e1-77530f838a25 container projected-configmap-volume-test: STEP: delete the pod Aug 25 01:11:52.202: INFO: Waiting for pod pod-projected-configmaps-cbf229ae-7f3c-4777-b8e1-77530f838a25 to disappear Aug 25 01:11:52.337: INFO: Pod pod-projected-configmaps-cbf229ae-7f3c-4777-b8e1-77530f838a25 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 01:11:52.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5158" for this suite. 
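Note on the spec above: the projected configMap test mounts a ConfigMap into the pod through a projected volume and asserts that the container can read the key back before exiting ("Succeeded or Failed"). A minimal sketch of the same consumption path, with hypothetical names (demo-config, projected-demo) rather than the generated ones in the log:

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo              # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: docker.io/library/busybox
    command: ["cat", "/etc/projected/data-1"]   # print the projected key, then exit 0
    volumeMounts:
    - name: config
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: config
    projected:                      # projected volume, as in [sig-storage] Projected configMap
      sources:
      - configMap:
          name: demo-config
EOF
kubectl logs projected-demo         # expected output once the pod succeeds: value-1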
• [SLOW TEST:10.140 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":285,"skipped":4711,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 01:11:52.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-1343 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 25 01:11:52.570: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Aug 25 01:11:52.710: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 25 01:11:54.714: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 25 01:11:56.715: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 25 01:11:58.714: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 25 01:12:00.745: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 25 01:12:02.714: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 25 01:12:04.744: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 25 01:12:06.713: INFO: The status of Pod netserver-0 is Running (Ready = true) Aug 25 01:12:06.720: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 25 01:12:08.724: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 25 01:12:10.725: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 25 01:12:13.302: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Aug 25 01:12:19.559: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.50:8080/dial?request=hostname&protocol=udp&host=10.244.2.199&port=8081&tries=1'] Namespace:pod-network-test-1343 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 25 01:12:19.559: INFO: >>> kubeConfig: /root/.kube/config I0825 01:12:19.591895 7 log.go:181] 
(0xc000851760) (0xc002cb19a0) Create stream I0825 01:12:19.591933 7 log.go:181] (0xc000851760) (0xc002cb19a0) Stream added, broadcasting: 1 I0825 01:12:19.593663 7 log.go:181] (0xc000851760) Reply frame received for 1 I0825 01:12:19.593706 7 log.go:181] (0xc000851760) (0xc002cb1a40) Create stream I0825 01:12:19.593716 7 log.go:181] (0xc000851760) (0xc002cb1a40) Stream added, broadcasting: 3 I0825 01:12:19.595034 7 log.go:181] (0xc000851760) Reply frame received for 3 I0825 01:12:19.595064 7 log.go:181] (0xc000851760) (0xc000862500) Create stream I0825 01:12:19.595072 7 log.go:181] (0xc000851760) (0xc000862500) Stream added, broadcasting: 5 I0825 01:12:19.597783 7 log.go:181] (0xc000851760) Reply frame received for 5 I0825 01:12:19.668533 7 log.go:181] (0xc000851760) Data frame received for 3 I0825 01:12:19.668556 7 log.go:181] (0xc002cb1a40) (3) Data frame handling I0825 01:12:19.668571 7 log.go:181] (0xc002cb1a40) (3) Data frame sent I0825 01:12:19.672926 7 log.go:181] (0xc000851760) Data frame received for 5 I0825 01:12:19.672957 7 log.go:181] (0xc000862500) (5) Data frame handling I0825 01:12:19.673022 7 log.go:181] (0xc000851760) Data frame received for 3 I0825 01:12:19.673047 7 log.go:181] (0xc002cb1a40) (3) Data frame handling I0825 01:12:19.674380 7 log.go:181] (0xc000851760) Data frame received for 1 I0825 01:12:19.674397 7 log.go:181] (0xc002cb19a0) (1) Data frame handling I0825 01:12:19.674405 7 log.go:181] (0xc002cb19a0) (1) Data frame sent I0825 01:12:19.674413 7 log.go:181] (0xc000851760) (0xc002cb19a0) Stream removed, broadcasting: 1 I0825 01:12:19.674449 7 log.go:181] (0xc000851760) Go away received I0825 01:12:19.674534 7 log.go:181] (0xc000851760) (0xc002cb19a0) Stream removed, broadcasting: 1 I0825 01:12:19.674555 7 log.go:181] (0xc000851760) (0xc002cb1a40) Stream removed, broadcasting: 3 I0825 01:12:19.674566 7 log.go:181] (0xc000851760) (0xc000862500) Stream removed, broadcasting: 5 Aug 25 01:12:19.674: INFO: Waiting for responses: map[] Aug 25 01:12:19.677: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.50:8080/dial?request=hostname&protocol=udp&host=10.244.1.49&port=8081&tries=1'] Namespace:pod-network-test-1343 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 25 01:12:19.677: INFO: >>> kubeConfig: /root/.kube/config I0825 01:12:19.708557 7 log.go:181] (0xc006ffc580) (0xc00148b9a0) Create stream I0825 01:12:19.708608 7 log.go:181] (0xc006ffc580) (0xc00148b9a0) Stream added, broadcasting: 1 I0825 01:12:19.716971 7 log.go:181] (0xc006ffc580) Reply frame received for 1 I0825 01:12:19.717017 7 log.go:181] (0xc006ffc580) (0xc00148ba40) Create stream I0825 01:12:19.717029 7 log.go:181] (0xc006ffc580) (0xc00148ba40) Stream added, broadcasting: 3 I0825 01:12:19.718036 7 log.go:181] (0xc006ffc580) Reply frame received for 3 I0825 01:12:19.718074 7 log.go:181] (0xc006ffc580) (0xc00053b860) Create stream I0825 01:12:19.718093 7 log.go:181] (0xc006ffc580) (0xc00053b860) Stream added, broadcasting: 5 I0825 01:12:19.718855 7 log.go:181] (0xc006ffc580) Reply frame received for 5 I0825 01:12:19.785502 7 log.go:181] (0xc006ffc580) Data frame received for 3 I0825 01:12:19.785535 7 log.go:181] (0xc00148ba40) (3) Data frame handling I0825 01:12:19.785561 7 log.go:181] (0xc00148ba40) (3) Data frame sent I0825 01:12:19.786113 7 log.go:181] (0xc006ffc580) Data frame received for 3 I0825 01:12:19.786145 7 log.go:181] (0xc00148ba40) (3) Data frame handling I0825 01:12:19.786263 7 
log.go:181] (0xc006ffc580) Data frame received for 5 I0825 01:12:19.786289 7 log.go:181] (0xc00053b860) (5) Data frame handling I0825 01:12:19.787636 7 log.go:181] (0xc006ffc580) Data frame received for 1 I0825 01:12:19.787682 7 log.go:181] (0xc00148b9a0) (1) Data frame handling I0825 01:12:19.787701 7 log.go:181] (0xc00148b9a0) (1) Data frame sent I0825 01:12:19.787716 7 log.go:181] (0xc006ffc580) (0xc00148b9a0) Stream removed, broadcasting: 1 I0825 01:12:19.787743 7 log.go:181] (0xc006ffc580) Go away received I0825 01:12:19.787837 7 log.go:181] (0xc006ffc580) (0xc00148b9a0) Stream removed, broadcasting: 1 I0825 01:12:19.787858 7 log.go:181] (0xc006ffc580) (0xc00148ba40) Stream removed, broadcasting: 3 I0825 01:12:19.787872 7 log.go:181] (0xc006ffc580) (0xc00053b860) Stream removed, broadcasting: 5 Aug 25 01:12:19.787: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 01:12:19.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1343" for this suite. • [SLOW TEST:28.473 seconds] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":303,"completed":286,"skipped":4722,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 01:12:20.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 25 01:12:30.057: INFO: Expected: &{} to match Container's 
Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 01:12:30.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9545" for this suite. • [SLOW TEST:10.201 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":287,"skipped":4724,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 01:12:31.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Aug 25 01:12:31.432: INFO: Waiting up to 5m0s for pod "pod-bf064af4-5860-4f9d-b377-c8805e311385" in namespace "emptydir-2416" to be "Succeeded or Failed" Aug 25 01:12:31.748: INFO: Pod "pod-bf064af4-5860-4f9d-b377-c8805e311385": Phase="Pending", Reason="", readiness=false. Elapsed: 315.470605ms Aug 25 01:12:33.752: INFO: Pod "pod-bf064af4-5860-4f9d-b377-c8805e311385": Phase="Pending", Reason="", readiness=false. Elapsed: 2.319948424s Aug 25 01:12:36.280: INFO: Pod "pod-bf064af4-5860-4f9d-b377-c8805e311385": Phase="Running", Reason="", readiness=true. Elapsed: 4.847868967s Aug 25 01:12:38.285: INFO: Pod "pod-bf064af4-5860-4f9d-b377-c8805e311385": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.852591453s STEP: Saw pod success Aug 25 01:12:38.285: INFO: Pod "pod-bf064af4-5860-4f9d-b377-c8805e311385" satisfied condition "Succeeded or Failed" Aug 25 01:12:38.288: INFO: Trying to get logs from node latest-worker pod pod-bf064af4-5860-4f9d-b377-c8805e311385 container test-container: STEP: delete the pod Aug 25 01:12:38.344: INFO: Waiting for pod pod-bf064af4-5860-4f9d-b377-c8805e311385 to disappear Aug 25 01:12:38.358: INFO: Pod pod-bf064af4-5860-4f9d-b377-c8805e311385 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 01:12:38.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2416" for this suite. • [SLOW TEST:7.348 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":288,"skipped":4738,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 01:12:38.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-739.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-739.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-739.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-739.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-739.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-739.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 25 01:12:46.680: INFO: DNS probes using dns-739/dns-test-044ff59e-adf1-4e4f-beeb-34f2eaaef832 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 01:12:47.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-739" for this suite. • [SLOW TEST:9.027 seconds] [sig-network] DNS /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":303,"completed":289,"skipped":4754,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 01:12:47.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 25 
01:12:47.613: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6270' Aug 25 01:12:48.364: INFO: stderr: "" Aug 25 01:12:48.365: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Aug 25 01:12:48.365: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6270' Aug 25 01:12:48.671: INFO: stderr: "" Aug 25 01:12:48.671: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Aug 25 01:12:49.676: INFO: Selector matched 1 pods for map[app:agnhost] Aug 25 01:12:49.676: INFO: Found 0 / 1 Aug 25 01:12:50.677: INFO: Selector matched 1 pods for map[app:agnhost] Aug 25 01:12:50.677: INFO: Found 0 / 1 Aug 25 01:12:51.675: INFO: Selector matched 1 pods for map[app:agnhost] Aug 25 01:12:51.675: INFO: Found 0 / 1 Aug 25 01:12:52.690: INFO: Selector matched 1 pods for map[app:agnhost] Aug 25 01:12:52.690: INFO: Found 0 / 1 Aug 25 01:12:53.676: INFO: Selector matched 1 pods for map[app:agnhost] Aug 25 01:12:53.676: INFO: Found 1 / 1 Aug 25 01:12:53.676: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Aug 25 01:12:53.679: INFO: Selector matched 1 pods for map[app:agnhost] Aug 25 01:12:53.679: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Aug 25 01:12:53.679: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config describe pod agnhost-primary-lrhw7 --namespace=kubectl-6270' Aug 25 01:12:53.815: INFO: stderr: "" Aug 25 01:12:53.815: INFO: stdout: "Name: agnhost-primary-lrhw7\nNamespace: kubectl-6270\nPriority: 0\nNode: latest-worker/172.18.0.11\nStart Time: Tue, 25 Aug 2020 01:12:48 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nStatus: Running\nIP: 10.244.2.202\nIPs:\n IP: 10.244.2.202\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://783cd2e37d2168cba3831e3d84a8d091893f44c72fc55a3bcb6a7fd28d29016a\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 25 Aug 2020 01:12:52 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-j247v (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-j247v:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-j247v\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5s Successfully assigned kubectl-6270/agnhost-primary-lrhw7 to latest-worker\n Normal Pulled 3s kubelet, latest-worker Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.20\" already present on machine\n Normal Created 2s kubelet, latest-worker Created container agnhost-primary\n Normal Started 1s kubelet, latest-worker Started container agnhost-primary\n" Aug 25 01:12:53.815: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config describe rc agnhost-primary --namespace=kubectl-6270'
Aug 25 01:12:54.144: INFO: stderr: "" Aug 25 01:12:54.144: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-6270\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 6s replication-controller Created pod: agnhost-primary-lrhw7\n" Aug 25 01:12:54.145: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config describe service agnhost-primary --namespace=kubectl-6270' Aug 25 01:12:54.356: INFO: stderr: "" Aug 25 01:12:54.356: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-6270\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP: 10.111.236.228\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.202:6379\nSession Affinity: None\nEvents: <none>\n" Aug 25 01:12:54.360: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config describe node latest-control-plane' Aug 25 01:12:54.504: INFO: stderr: "" Aug 25 01:12:54.504: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 15 Aug 2020 09:42:01 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: <unset>\n RenewTime: Tue, 25 Aug 2020 01:12:46 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 25 Aug 2020 01:12:20 +0000 Sat, 15 Aug 2020 09:41:59 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 25 Aug 2020 01:12:20 +0000 Sat, 15 Aug 2020 09:41:59 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 25 Aug 2020 01:12:20 +0000 Sat, 15 Aug 2020 09:41:59 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 25 Aug 2020 01:12:20 +0000 Sat, 15 Aug 2020 09:42:31 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.12\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nSystem Info:\n Machine ID: 355da13825784523b4a253c23edd1334\n System UUID: 8f367e0f-042b-45ff-9966-5ca6bcc1cc56\n Boot ID: 11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version: 4.15.0-109-generic\n OS Image: Ubuntu 20.04 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0-beta.1-85-g334f567e\n Kubelet Version: v1.19.0-rc.1\n Kube-Proxy Version: 
v1.19.0-rc.1\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-f9fd979d6-f7hdg 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 9d\n kube-system coredns-f9fd979d6-vxzgb 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 9d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9d\n kube-system kindnet-qmj2d 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 9d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 9d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 9d\n kube-system kube-proxy-8zfjc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 9d\n local-path-storage local-path-provisioner-8b46957d4-csnr8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: <none>\n" Aug 25 01:12:54.504: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config describe namespace kubectl-6270' Aug 25 01:12:54.633: INFO: stderr: "" Aug 25 01:12:54.633: INFO: stdout: "Name: kubectl-6270\nLabels: e2e-framework=kubectl\n e2e-run=3c9f0768-24c9-4b25-8296-6ddebf06d887\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 01:12:54.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6270" for this suite. 
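Note: the two manifests piped to "kubectl create -f -" above are read from stdin and never echoed into the log. As a rough reconstruction, the following client-go sketch builds an equivalent ReplicationController from the names, image, and port visible in the describe output; every other field is an assumption, and this is illustrative code, not the test's actual source.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path the e2e run logs above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	replicas := int32(1) // the describe output above reports "1 current / 1 desired"
	labels := map[string]string{"app": "agnhost", "role": "primary"}
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "agnhost-primary"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost-primary",
						Image: "k8s.gcr.io/e2e-test-images/agnhost:2.20",
						Ports: []corev1.ContainerPort{{ContainerPort: 6379}},
					}},
				},
			},
		},
	}
	// "kubectl-6270" is the ephemeral test namespace from the log; substitute your own.
	created, err := cs.CoreV1().ReplicationControllers("kubectl-6270").Create(context.TODO(), rc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created replicationcontroller/" + created.Name)
}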
• [SLOW TEST:7.242 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1105 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":303,"completed":290,"skipped":4762,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 01:12:54.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 25 01:12:54.869: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Aug 25 01:12:54.877: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:12:54.898: INFO: Number of nodes with available pods: 0 Aug 25 01:12:54.898: INFO: Node latest-worker is running more than one daemon pod Aug 25 01:12:55.904: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:12:55.908: INFO: Number of nodes with available pods: 0 Aug 25 01:12:55.908: INFO: Node latest-worker is running more than one daemon pod Aug 25 01:12:56.904: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:12:56.907: INFO: Number of nodes with available pods: 0 Aug 25 01:12:56.907: INFO: Node latest-worker is running more than one daemon pod Aug 25 01:12:57.904: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:12:57.907: INFO: Number of nodes with available pods: 0 Aug 25 01:12:57.907: INFO: Node latest-worker is running more than one daemon pod Aug 25 01:12:59.184: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:12:59.212: INFO: Number of nodes with available pods: 0 Aug 25 01:12:59.212: INFO: Node latest-worker is running more than one daemon pod Aug 25 01:13:00.136: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:00.375: INFO: Number of nodes with available pods: 1 Aug 25 01:13:00.375: INFO: Node latest-worker2 is running more than one daemon pod Aug 25 01:13:01.024: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:01.380: INFO: Number of nodes with available pods: 2 Aug 25 01:13:01.380: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Aug 25 01:13:02.739: INFO: Wrong image for pod: daemon-set-9d6ct. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 25 01:13:02.739: INFO: Wrong image for pod: daemon-set-ckhqj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 25 01:13:02.806: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:03.810: INFO: Wrong image for pod: daemon-set-9d6ct. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 25 01:13:03.810: INFO: Wrong image for pod: daemon-set-ckhqj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Aug 25 01:13:03.814: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:04.813: INFO: Wrong image for pod: daemon-set-9d6ct. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 25 01:13:04.813: INFO: Wrong image for pod: daemon-set-ckhqj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 25 01:13:04.817: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:05.848: INFO: Wrong image for pod: daemon-set-9d6ct. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 25 01:13:05.848: INFO: Wrong image for pod: daemon-set-ckhqj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 25 01:13:05.851: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:07.069: INFO: Wrong image for pod: daemon-set-9d6ct. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 25 01:13:07.069: INFO: Wrong image for pod: daemon-set-ckhqj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 25 01:13:07.069: INFO: Pod daemon-set-ckhqj is not available Aug 25 01:13:07.311: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:07.885: INFO: Wrong image for pod: daemon-set-9d6ct. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 25 01:13:07.885: INFO: Wrong image for pod: daemon-set-ckhqj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 25 01:13:07.885: INFO: Pod daemon-set-ckhqj is not available Aug 25 01:13:08.124: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:08.810: INFO: Wrong image for pod: daemon-set-9d6ct. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 25 01:13:08.810: INFO: Wrong image for pod: daemon-set-ckhqj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 25 01:13:08.810: INFO: Pod daemon-set-ckhqj is not available Aug 25 01:13:08.814: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:10.124: INFO: Wrong image for pod: daemon-set-9d6ct. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 25 01:13:10.124: INFO: Wrong image for pod: daemon-set-ckhqj. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Aug 25 01:13:10.124: INFO: Pod daemon-set-ckhqj is not available Aug 25 01:13:10.129: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:10.824: INFO: Wrong image for pod: daemon-set-9d6ct. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 25 01:13:10.824: INFO: Pod daemon-set-qm26m is not available Aug 25 01:13:10.828: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:11.961: INFO: Wrong image for pod: daemon-set-9d6ct. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 25 01:13:11.961: INFO: Pod daemon-set-qm26m is not available Aug 25 01:13:11.964: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:13.333: INFO: Wrong image for pod: daemon-set-9d6ct. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 25 01:13:13.333: INFO: Pod daemon-set-qm26m is not available Aug 25 01:13:13.337: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:13.919: INFO: Wrong image for pod: daemon-set-9d6ct. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 25 01:13:13.919: INFO: Pod daemon-set-qm26m is not available Aug 25 01:13:13.923: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:14.811: INFO: Wrong image for pod: daemon-set-9d6ct. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 25 01:13:14.811: INFO: Pod daemon-set-qm26m is not available Aug 25 01:13:14.816: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:15.823: INFO: Wrong image for pod: daemon-set-9d6ct. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 25 01:13:15.835: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:16.873: INFO: Wrong image for pod: daemon-set-9d6ct. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 25 01:13:17.215: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:18.082: INFO: Wrong image for pod: daemon-set-9d6ct. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Aug 25 01:13:18.082: INFO: Pod daemon-set-9d6ct is not available Aug 25 01:13:18.579: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:19.169: INFO: Wrong image for pod: daemon-set-9d6ct. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 25 01:13:19.169: INFO: Pod daemon-set-9d6ct is not available Aug 25 01:13:19.172: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:19.810: INFO: Wrong image for pod: daemon-set-9d6ct. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 25 01:13:19.810: INFO: Pod daemon-set-9d6ct is not available Aug 25 01:13:19.813: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:20.810: INFO: Wrong image for pod: daemon-set-9d6ct. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 25 01:13:20.810: INFO: Pod daemon-set-9d6ct is not available Aug 25 01:13:20.813: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:21.810: INFO: Wrong image for pod: daemon-set-9d6ct. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 25 01:13:21.810: INFO: Pod daemon-set-9d6ct is not available Aug 25 01:13:21.814: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:22.836: INFO: Wrong image for pod: daemon-set-9d6ct. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 25 01:13:22.836: INFO: Pod daemon-set-9d6ct is not available Aug 25 01:13:22.839: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:23.810: INFO: Wrong image for pod: daemon-set-9d6ct. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 25 01:13:23.810: INFO: Pod daemon-set-9d6ct is not available Aug 25 01:13:23.814: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:24.811: INFO: Wrong image for pod: daemon-set-9d6ct. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 25 01:13:24.811: INFO: Pod daemon-set-9d6ct is not available Aug 25 01:13:24.816: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:25.810: INFO: Wrong image for pod: daemon-set-9d6ct. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Aug 25 01:13:25.810: INFO: Pod daemon-set-9d6ct is not available Aug 25 01:13:25.814: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:26.811: INFO: Wrong image for pod: daemon-set-9d6ct. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 25 01:13:26.811: INFO: Pod daemon-set-9d6ct is not available Aug 25 01:13:26.814: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:27.811: INFO: Wrong image for pod: daemon-set-9d6ct. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 25 01:13:27.811: INFO: Pod daemon-set-9d6ct is not available Aug 25 01:13:27.815: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:28.811: INFO: Wrong image for pod: daemon-set-9d6ct. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 25 01:13:28.811: INFO: Pod daemon-set-9d6ct is not available Aug 25 01:13:28.816: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:29.815: INFO: Pod daemon-set-pgtl6 is not available Aug 25 01:13:29.818: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Aug 25 01:13:29.821: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:29.823: INFO: Number of nodes with available pods: 1 Aug 25 01:13:29.823: INFO: Node latest-worker2 is running more than one daemon pod Aug 25 01:13:30.828: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:30.832: INFO: Number of nodes with available pods: 1 Aug 25 01:13:30.832: INFO: Node latest-worker2 is running more than one daemon pod Aug 25 01:13:31.828: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:31.831: INFO: Number of nodes with available pods: 1 Aug 25 01:13:31.831: INFO: Node latest-worker2 is running more than one daemon pod Aug 25 01:13:32.828: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:32.832: INFO: Number of nodes with available pods: 1 Aug 25 01:13:32.832: INFO: Node latest-worker2 is running more than one daemon pod Aug 25 01:13:33.828: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node Aug 25 01:13:33.832: INFO: Number of nodes with available pods: 2 Aug 25 01:13:33.832: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-299, will wait for the garbage collector to delete the pods Aug 25 01:13:33.903: INFO: Deleting DaemonSet.extensions daemon-set took: 6.254911ms Aug 25 01:13:34.704: INFO: Terminating DaemonSet.extensions daemon-set pods took: 800.26059ms Aug 25 01:13:40.267: INFO: Number of nodes with available pods: 0 Aug 25 01:13:40.267: INFO: Number of running nodes: 0, number of available pods: 0 Aug 25 01:13:40.269: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-299/daemonsets","resourceVersion":"3443862"},"items":null} Aug 25 01:13:40.271: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-299/pods","resourceVersion":"3443862"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 01:13:40.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-299" for this suite. 
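Note: the convergence loop above is RollingUpdate in action: after the pod template image is changed, the DaemonSet controller replaces pods one node at a time until every pod reports the new agnhost:2.20 image. A minimal client-go sketch of the triggering update follows; the DaemonSet name and namespace are taken from the log, everything else is an assumption, illustrative rather than the e2e suite's actual code.

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	dsClient := cs.AppsV1().DaemonSets("daemonsets-299") // namespace from the log
	ds, err := dsClient.Get(context.TODO(), "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// RollingUpdate is the default strategy; set it explicitly for clarity.
	ds.Spec.UpdateStrategy = appsv1.DaemonSetUpdateStrategy{
		Type: appsv1.RollingUpdateDaemonSetStrategyType,
	}
	// Swap the pod template image; the controller then replaces pods node by node,
	// producing the "Wrong image for pod ..." convergence loop logged above.
	ds.Spec.Template.Spec.Containers[0].Image = "k8s.gcr.io/e2e-test-images/agnhost:2.20"
	if _, err := dsClient.Update(context.TODO(), ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}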
• [SLOW TEST:45.643 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":303,"completed":291,"skipped":4765,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 01:13:40.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0825 01:14:21.555718 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 25 01:15:23.573: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Aug 25 01:15:23.574: INFO: Deleting pod "simpletest.rc-427mj" in namespace "gc-4040" Aug 25 01:15:23.623: INFO: Deleting pod "simpletest.rc-9dqfq" in namespace "gc-4040" Aug 25 01:15:23.723: INFO: Deleting pod "simpletest.rc-bx6qb" in namespace "gc-4040" Aug 25 01:15:24.539: INFO: Deleting pod "simpletest.rc-cff7s" in namespace "gc-4040" Aug 25 01:15:24.960: INFO: Deleting pod "simpletest.rc-lwht2" in namespace "gc-4040" Aug 25 01:15:25.120: INFO: Deleting pod "simpletest.rc-lxbf4" in namespace "gc-4040" Aug 25 01:15:25.636: INFO: Deleting pod "simpletest.rc-p7qcb" in namespace "gc-4040" Aug 25 01:15:25.970: INFO: Deleting pod "simpletest.rc-plkwv" in namespace "gc-4040" Aug 25 01:15:26.066: INFO: Deleting pod "simpletest.rc-qrg95" in namespace "gc-4040" Aug 25 01:15:26.370: INFO: Deleting pod "simpletest.rc-zqg8h" in namespace "gc-4040" [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 01:15:26.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4040" for this suite. 
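Note: the test above exercises the orphan deletion path: the ReplicationController is deleted with an Orphan propagation policy, the garbage collector is given 30 seconds to (correctly) leave the pods alone, and the surviving pods are then cleaned up by hand. A minimal client-go sketch of such a delete follows; the RC name "simpletest.rc" is inferred from the pod names in the log, and the code is illustrative, not the suite's own.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Orphan propagation: the RC object itself is deleted, but the garbage
	// collector leaves its pods in place (which the 30s wait above verifies).
	orphan := metav1.DeletePropagationOrphan
	err = cs.CoreV1().ReplicationControllers("gc-4040").Delete(
		context.TODO(),
		"simpletest.rc", // name inferred from the "simpletest.rc-xxxxx" pods above
		metav1.DeleteOptions{PropagationPolicy: &orphan},
	)
	if err != nil {
		panic(err)
	}
}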
• [SLOW TEST:106.318 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":303,"completed":292,"skipped":4799,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 01:15:26.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-730 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 25 01:15:26.999: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Aug 25 01:15:27.371: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 25 01:15:29.414: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 25 01:15:31.381: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 25 01:15:33.375: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 25 01:15:35.375: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 25 01:15:37.449: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 25 01:15:39.375: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 25 01:15:41.375: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 25 01:15:43.375: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 25 01:15:45.375: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 25 01:15:47.375: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 25 01:15:49.375: INFO: The status of Pod netserver-0 is Running (Ready = true) Aug 25 01:15:49.381: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Aug 25 01:15:53.578: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.210 8081 | grep -v '^\s*$'] Namespace:pod-network-test-730 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 25 01:15:53.578: INFO: >>> kubeConfig: 
/root/.kube/config I0825 01:15:53.602723 7 log.go:181] (0xc000851550) (0xc003bab9a0) Create stream I0825 01:15:53.602746 7 log.go:181] (0xc000851550) (0xc003bab9a0) Stream added, broadcasting: 1 I0825 01:15:53.604470 7 log.go:181] (0xc000851550) Reply frame received for 1 I0825 01:15:53.604499 7 log.go:181] (0xc000851550) (0xc002dc6a00) Create stream I0825 01:15:53.604520 7 log.go:181] (0xc000851550) (0xc002dc6a00) Stream added, broadcasting: 3 I0825 01:15:53.605559 7 log.go:181] (0xc000851550) Reply frame received for 3 I0825 01:15:53.605595 7 log.go:181] (0xc000851550) (0xc003cb01e0) Create stream I0825 01:15:53.605608 7 log.go:181] (0xc000851550) (0xc003cb01e0) Stream added, broadcasting: 5 I0825 01:15:53.606464 7 log.go:181] (0xc000851550) Reply frame received for 5 I0825 01:15:54.690287 7 log.go:181] (0xc000851550) Data frame received for 3 I0825 01:15:54.690345 7 log.go:181] (0xc002dc6a00) (3) Data frame handling I0825 01:15:54.690393 7 log.go:181] (0xc002dc6a00) (3) Data frame sent I0825 01:15:54.690441 7 log.go:181] (0xc000851550) Data frame received for 3 I0825 01:15:54.690469 7 log.go:181] (0xc002dc6a00) (3) Data frame handling I0825 01:15:54.690508 7 log.go:181] (0xc000851550) Data frame received for 5 I0825 01:15:54.690525 7 log.go:181] (0xc003cb01e0) (5) Data frame handling I0825 01:15:54.693043 7 log.go:181] (0xc000851550) Data frame received for 1 I0825 01:15:54.693080 7 log.go:181] (0xc003bab9a0) (1) Data frame handling I0825 01:15:54.693114 7 log.go:181] (0xc003bab9a0) (1) Data frame sent I0825 01:15:54.693177 7 log.go:181] (0xc000851550) (0xc003bab9a0) Stream removed, broadcasting: 1 I0825 01:15:54.693229 7 log.go:181] (0xc000851550) Go away received I0825 01:15:54.693347 7 log.go:181] (0xc000851550) (0xc003bab9a0) Stream removed, broadcasting: 1 I0825 01:15:54.693373 7 log.go:181] (0xc000851550) (0xc002dc6a00) Stream removed, broadcasting: 3 I0825 01:15:54.693385 7 log.go:181] (0xc000851550) (0xc003cb01e0) Stream removed, broadcasting: 5 Aug 25 01:15:54.693: INFO: Found all expected endpoints: [netserver-0] Aug 25 01:15:54.697: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.59 8081 | grep -v '^\s*$'] Namespace:pod-network-test-730 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 25 01:15:54.697: INFO: >>> kubeConfig: /root/.kube/config I0825 01:15:54.729175 7 log.go:181] (0xc006ffc370) (0xc002dc6dc0) Create stream I0825 01:15:54.729197 7 log.go:181] (0xc006ffc370) (0xc002dc6dc0) Stream added, broadcasting: 1 I0825 01:15:54.731034 7 log.go:181] (0xc006ffc370) Reply frame received for 1 I0825 01:15:54.731076 7 log.go:181] (0xc006ffc370) (0xc0039fa000) Create stream I0825 01:15:54.731092 7 log.go:181] (0xc006ffc370) (0xc0039fa000) Stream added, broadcasting: 3 I0825 01:15:54.732141 7 log.go:181] (0xc006ffc370) Reply frame received for 3 I0825 01:15:54.732161 7 log.go:181] (0xc006ffc370) (0xc002dc6e60) Create stream I0825 01:15:54.732175 7 log.go:181] (0xc006ffc370) (0xc002dc6e60) Stream added, broadcasting: 5 I0825 01:15:54.733312 7 log.go:181] (0xc006ffc370) Reply frame received for 5 I0825 01:15:55.830633 7 log.go:181] (0xc006ffc370) Data frame received for 3 I0825 01:15:55.830658 7 log.go:181] (0xc0039fa000) (3) Data frame handling I0825 01:15:55.830672 7 log.go:181] (0xc0039fa000) (3) Data frame sent I0825 01:15:55.830792 7 log.go:181] (0xc006ffc370) Data frame received for 5 I0825 01:15:55.830835 7 log.go:181] (0xc002dc6e60) (5) Data frame handling I0825 
01:15:55.831000 7 log.go:181] (0xc006ffc370) Data frame received for 3 I0825 01:15:55.831015 7 log.go:181] (0xc0039fa000) (3) Data frame handling I0825 01:15:55.832433 7 log.go:181] (0xc006ffc370) Data frame received for 1 I0825 01:15:55.832454 7 log.go:181] (0xc002dc6dc0) (1) Data frame handling I0825 01:15:55.832467 7 log.go:181] (0xc002dc6dc0) (1) Data frame sent I0825 01:15:55.832486 7 log.go:181] (0xc006ffc370) (0xc002dc6dc0) Stream removed, broadcasting: 1 I0825 01:15:55.832503 7 log.go:181] (0xc006ffc370) Go away received I0825 01:15:55.832593 7 log.go:181] (0xc006ffc370) (0xc002dc6dc0) Stream removed, broadcasting: 1 I0825 01:15:55.832611 7 log.go:181] (0xc006ffc370) (0xc0039fa000) Stream removed, broadcasting: 3 I0825 01:15:55.832624 7 log.go:181] (0xc006ffc370) (0xc002dc6e60) Stream removed, broadcasting: 5 Aug 25 01:15:55.832: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 01:15:55.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-730" for this suite. • [SLOW TEST:29.239 seconds] [sig-network] Networking /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":293,"skipped":4841,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSS ------------------------------ [sig-node] PodTemplates should delete a collection of pod templates [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 01:15:55.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of pod templates Aug 25 01:15:55.983: INFO: created test-podtemplate-1 Aug 25 01:15:55.989: INFO: created test-podtemplate-2 Aug 25 01:15:55.995: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates Aug 25 01:15:56.001: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested 
quantity Aug 25 01:15:56.069: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 25 01:15:56.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-790" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":303,"completed":294,"skipped":4850,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 25 01:15:56.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-2793 STEP: creating service affinity-nodeport in namespace services-2793 STEP: creating replication controller affinity-nodeport in namespace services-2793 I0825 01:15:56.268204 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-2793, replica count: 3 I0825 01:15:59.318592 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0825 01:16:02.318877 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0825 01:16:05.319123 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 25 01:16:05.330: INFO: Creating new exec pod Aug 25 01:16:10.370: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2793 execpod-affinity7gx89 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Aug 25 01:16:16.125: INFO: stderr: "I0825 01:16:16.061314 3837 log.go:181] (0xc000142370) (0xc000ce6000) Create stream\nI0825 01:16:16.061398 3837 log.go:181] (0xc000142370) (0xc000ce6000) Stream added, broadcasting: 1\nI0825 01:16:16.063613 3837 log.go:181] (0xc000142370) Reply frame received for 1\nI0825 01:16:16.063661 3837 log.go:181] (0xc000142370) (0xc000536000) Create stream\nI0825 01:16:16.063676 3837 log.go:181] (0xc000142370) (0xc000536000) Stream added, broadcasting: 3\nI0825 01:16:16.064822 3837 log.go:181] (0xc000142370) Reply frame received for 3\nI0825 01:16:16.064867 3837 
log.go:181] (0xc000142370) (0xc000ce60a0) Create stream\nI0825 01:16:16.064883 3837 log.go:181] (0xc000142370) (0xc000ce60a0) Stream added, broadcasting: 5\nI0825 01:16:16.065932 3837 log.go:181] (0xc000142370) Reply frame received for 5\nI0825 01:16:16.112517 3837 log.go:181] (0xc000142370) Data frame received for 5\nI0825 01:16:16.112544 3837 log.go:181] (0xc000ce60a0) (5) Data frame handling\nI0825 01:16:16.112559 3837 log.go:181] (0xc000ce60a0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nI0825 01:16:16.112632 3837 log.go:181] (0xc000142370) Data frame received for 5\nI0825 01:16:16.112651 3837 log.go:181] (0xc000ce60a0) (5) Data frame handling\nI0825 01:16:16.112661 3837 log.go:181] (0xc000ce60a0) (5) Data frame sent\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0825 01:16:16.113318 3837 log.go:181] (0xc000142370) Data frame received for 3\nI0825 01:16:16.113335 3837 log.go:181] (0xc000536000) (3) Data frame handling\nI0825 01:16:16.113359 3837 log.go:181] (0xc000142370) Data frame received for 5\nI0825 01:16:16.113371 3837 log.go:181] (0xc000ce60a0) (5) Data frame handling\nI0825 01:16:16.115524 3837 log.go:181] (0xc000142370) Data frame received for 1\nI0825 01:16:16.115561 3837 log.go:181] (0xc000ce6000) (1) Data frame handling\nI0825 01:16:16.115576 3837 log.go:181] (0xc000ce6000) (1) Data frame sent\nI0825 01:16:16.115588 3837 log.go:181] (0xc000142370) (0xc000ce6000) Stream removed, broadcasting: 1\nI0825 01:16:16.115604 3837 log.go:181] (0xc000142370) Go away received\nI0825 01:16:16.116008 3837 log.go:181] (0xc000142370) (0xc000ce6000) Stream removed, broadcasting: 1\nI0825 01:16:16.116029 3837 log.go:181] (0xc000142370) (0xc000536000) Stream removed, broadcasting: 3\nI0825 01:16:16.116043 3837 log.go:181] (0xc000142370) (0xc000ce60a0) Stream removed, broadcasting: 5\n" Aug 25 01:16:16.125: INFO: stdout: "" Aug 25 01:16:16.126: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2793 execpod-affinity7gx89 -- /bin/sh -x -c nc -zv -t -w 2 10.106.202.104 80' Aug 25 01:16:16.335: INFO: stderr: "I0825 01:16:16.250694 3855 log.go:181] (0xc00003b970) (0xc00017a8c0) Create stream\nI0825 01:16:16.250752 3855 log.go:181] (0xc00003b970) (0xc00017a8c0) Stream added, broadcasting: 1\nI0825 01:16:16.257641 3855 log.go:181] (0xc00003b970) Reply frame received for 1\nI0825 01:16:16.257685 3855 log.go:181] (0xc00003b970) (0xc0005d8500) Create stream\nI0825 01:16:16.257698 3855 log.go:181] (0xc00003b970) (0xc0005d8500) Stream added, broadcasting: 3\nI0825 01:16:16.258289 3855 log.go:181] (0xc00003b970) Reply frame received for 3\nI0825 01:16:16.258323 3855 log.go:181] (0xc00003b970) (0xc0005d85a0) Create stream\nI0825 01:16:16.258339 3855 log.go:181] (0xc00003b970) (0xc0005d85a0) Stream added, broadcasting: 5\nI0825 01:16:16.258982 3855 log.go:181] (0xc00003b970) Reply frame received for 5\nI0825 01:16:16.319565 3855 log.go:181] (0xc00003b970) Data frame received for 5\nI0825 01:16:16.319595 3855 log.go:181] (0xc0005d85a0) (5) Data frame handling\nI0825 01:16:16.319616 3855 log.go:181] (0xc0005d85a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.106.202.104 80\nI0825 01:16:16.320505 3855 log.go:181] (0xc00003b970) Data frame received for 5\nI0825 01:16:16.320530 3855 log.go:181] (0xc0005d85a0) (5) Data frame handling\nI0825 01:16:16.320541 3855 log.go:181] (0xc0005d85a0) (5) Data frame sent\nConnection to 10.106.202.104 80 port [tcp/http] succeeded!\nI0825 01:16:16.320963 3855 
log.go:181] (0xc00003b970) Data frame received for 3\nI0825 01:16:16.320992 3855 log.go:181] (0xc0005d8500) (3) Data frame handling\nI0825 01:16:16.321185 3855 log.go:181] (0xc00003b970) Data frame received for 5\nI0825 01:16:16.321211 3855 log.go:181] (0xc0005d85a0) (5) Data frame handling\nI0825 01:16:16.322565 3855 log.go:181] (0xc00003b970) Data frame received for 1\nI0825 01:16:16.322661 3855 log.go:181] (0xc00017a8c0) (1) Data frame handling\nI0825 01:16:16.322748 3855 log.go:181] (0xc00017a8c0) (1) Data frame sent\nI0825 01:16:16.322782 3855 log.go:181] (0xc00003b970) (0xc00017a8c0) Stream removed, broadcasting: 1\nI0825 01:16:16.322808 3855 log.go:181] (0xc00003b970) Go away received\nI0825 01:16:16.323177 3855 log.go:181] (0xc00003b970) (0xc00017a8c0) Stream removed, broadcasting: 1\nI0825 01:16:16.323194 3855 log.go:181] (0xc00003b970) (0xc0005d8500) Stream removed, broadcasting: 3\nI0825 01:16:16.323203 3855 log.go:181] (0xc00003b970) (0xc0005d85a0) Stream removed, broadcasting: 5\n" Aug 25 01:16:16.336: INFO: stdout: "" Aug 25 01:16:16.336: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2793 execpod-affinity7gx89 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.11 32641' Aug 25 01:16:16.566: INFO: stderr: "I0825 01:16:16.475172 3872 log.go:181] (0xc00003a0b0) (0xc0003a6fa0) Create stream\nI0825 01:16:16.475218 3872 log.go:181] (0xc00003a0b0) (0xc0003a6fa0) Stream added, broadcasting: 1\nI0825 01:16:16.476661 3872 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI0825 01:16:16.476690 3872 log.go:181] (0xc00003a0b0) (0xc000229360) Create stream\nI0825 01:16:16.476699 3872 log.go:181] (0xc00003a0b0) (0xc000229360) Stream added, broadcasting: 3\nI0825 01:16:16.477483 3872 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI0825 01:16:16.477506 3872 log.go:181] (0xc00003a0b0) (0xc00070eaa0) Create stream\nI0825 01:16:16.477514 3872 log.go:181] (0xc00003a0b0) (0xc00070eaa0) Stream added, broadcasting: 5\nI0825 01:16:16.478227 3872 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI0825 01:16:16.555338 3872 log.go:181] (0xc00003a0b0) Data frame received for 3\nI0825 01:16:16.555374 3872 log.go:181] (0xc000229360) (3) Data frame handling\nI0825 01:16:16.555405 3872 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0825 01:16:16.555436 3872 log.go:181] (0xc00070eaa0) (5) Data frame handling\nI0825 01:16:16.555458 3872 log.go:181] (0xc00070eaa0) (5) Data frame sent\nI0825 01:16:16.555485 3872 log.go:181] (0xc00003a0b0) Data frame received for 5\nI0825 01:16:16.555509 3872 log.go:181] (0xc00070eaa0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.11 32641\nConnection to 172.18.0.11 32641 port [tcp/32641] succeeded!\nI0825 01:16:16.556717 3872 log.go:181] (0xc00003a0b0) Data frame received for 1\nI0825 01:16:16.556814 3872 log.go:181] (0xc0003a6fa0) (1) Data frame handling\nI0825 01:16:16.556829 3872 log.go:181] (0xc0003a6fa0) (1) Data frame sent\nI0825 01:16:16.556902 3872 log.go:181] (0xc00003a0b0) (0xc0003a6fa0) Stream removed, broadcasting: 1\nI0825 01:16:16.556924 3872 log.go:181] (0xc00003a0b0) Go away received\nI0825 01:16:16.557319 3872 log.go:181] (0xc00003a0b0) (0xc0003a6fa0) Stream removed, broadcasting: 1\nI0825 01:16:16.557357 3872 log.go:181] (0xc00003a0b0) (0xc000229360) Stream removed, broadcasting: 3\nI0825 01:16:16.557376 3872 log.go:181] (0xc00003a0b0) (0xc00070eaa0) Stream removed, broadcasting: 5\n" Aug 25 01:16:16.566: INFO: stdout: "" Aug 25 01:16:16.566: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2793 execpod-affinity7gx89 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 32641' Aug 25 01:16:16.779: INFO: stderr: "I0825 01:16:16.700706 3890 log.go:181] (0xc000143810) (0xc0006d8a00) Create stream\nI0825 01:16:16.700871 3890 log.go:181] (0xc000143810) (0xc0006d8a00) Stream added, broadcasting: 1\nI0825 01:16:16.703444 3890 log.go:181] (0xc000143810) Reply frame received for 1\nI0825 01:16:16.703480 3890 log.go:181] (0xc000143810) (0xc000722320) Create stream\nI0825 01:16:16.703502 3890 log.go:181] (0xc000143810) (0xc000722320) Stream added, broadcasting: 3\nI0825 01:16:16.704445 3890 log.go:181] (0xc000143810) Reply frame received for 3\nI0825 01:16:16.704476 3890 log.go:181] (0xc000143810) (0xc0008a03c0) Create stream\nI0825 01:16:16.704495 3890 log.go:181] (0xc000143810) (0xc0008a03c0) Stream added, broadcasting: 5\nI0825 01:16:16.705486 3890 log.go:181] (0xc000143810) Reply frame received for 5\nI0825 01:16:16.769595 3890 log.go:181] (0xc000143810) Data frame received for 5\nI0825 01:16:16.769646 3890 log.go:181] (0xc0008a03c0) (5) Data frame handling\nI0825 01:16:16.769680 3890 log.go:181] (0xc0008a03c0) (5) Data frame sent\nI0825 01:16:16.769699 3890 log.go:181] (0xc000143810) Data frame received for 5\nI0825 01:16:16.769724 3890 log.go:181] (0xc0008a03c0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 32641\nConnection to 172.18.0.14 32641 port [tcp/32641] succeeded!\nI0825 01:16:16.769765 3890 log.go:181] (0xc0008a03c0) (5) Data frame sent\nI0825 01:16:16.769990 3890 log.go:181] (0xc000143810) Data frame received for 3\nI0825 01:16:16.770018 3890 log.go:181] (0xc000722320) (3) Data frame handling\nI0825 01:16:16.770298 3890 log.go:181] (0xc000143810) Data frame received for 5\nI0825 01:16:16.770329 3890 log.go:181] (0xc0008a03c0) (5) Data frame handling\nI0825 01:16:16.772018 3890 log.go:181] (0xc000143810) Data frame received for 1\nI0825 01:16:16.772031 3890 log.go:181] (0xc0006d8a00) (1) Data frame handling\nI0825 01:16:16.772038 3890 log.go:181] (0xc0006d8a00) (1) Data frame sent\nI0825 01:16:16.772047 3890 log.go:181] (0xc000143810) (0xc0006d8a00) Stream removed, broadcasting: 1\nI0825 01:16:16.772056 3890 log.go:181] (0xc000143810) Go away received\nI0825 01:16:16.772577 3890 log.go:181] (0xc000143810) (0xc0006d8a00) Stream removed, broadcasting: 1\nI0825 01:16:16.772603 3890 log.go:181] (0xc000143810) (0xc000722320) Stream removed, broadcasting: 3\nI0825 01:16:16.772615 3890 log.go:181] (0xc000143810) (0xc0008a03c0) Stream removed, broadcasting: 5\n" Aug 25 01:16:16.779: INFO: stdout: "" Aug 25 01:16:16.779: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config exec --namespace=services-2793 execpod-affinity7gx89 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.11:32641/ ; done' Aug 25 01:16:17.096: INFO: stderr: "I0825 01:16:16.913806 3908 log.go:181] (0xc000bdafd0) (0xc000bd2820) Create stream\nI0825 01:16:16.913886 3908 log.go:181] (0xc000bdafd0) (0xc000bd2820) Stream added, broadcasting: 1\nI0825 01:16:16.917199 3908 log.go:181] (0xc000bdafd0) Reply frame received for 1\nI0825 01:16:16.917264 3908 log.go:181] (0xc000bdafd0) (0xc000e04000) Create stream\nI0825 01:16:16.917279 3908 log.go:181] (0xc000bdafd0) (0xc000e04000) Stream added, broadcasting: 3\nI0825 01:16:16.918339 3908 log.go:181] (0xc000bdafd0) Reply frame received for 3\nI0825 01:16:16.918385 
3908 log.go:181] (0xc000bdafd0) (0xc000d9c0a0) Create stream\nI0825 01:16:16.918415 3908 log.go:181] (0xc000bdafd0) (0xc000d9c0a0) Stream added, broadcasting: 5\nI0825 01:16:16.919640 3908 log.go:181] (0xc000bdafd0) Reply frame received for 5\nI0825 01:16:16.990201 3908 log.go:181] (0xc000bdafd0) Data frame received for 5\nI0825 01:16:16.990274 3908 log.go:181] (0xc000d9c0a0) (5) Data frame handling\nI0825 01:16:16.990302 3908 log.go:181] (0xc000d9c0a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32641/\nI0825 01:16:16.990338 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:16.990357 3908 log.go:181] (0xc000e04000) (3) Data frame handling\nI0825 01:16:16.990387 3908 log.go:181] (0xc000e04000) (3) Data frame sent\nI0825 01:16:16.995976 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:16.996014 3908 log.go:181] (0xc000e04000) (3) Data frame handling\nI0825 01:16:16.996047 3908 log.go:181] (0xc000e04000) (3) Data frame sent\nI0825 01:16:16.996408 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:16.996424 3908 log.go:181] (0xc000e04000) (3) Data frame handling\nI0825 01:16:16.996441 3908 log.go:181] (0xc000bdafd0) Data frame received for 5\nI0825 01:16:16.996482 3908 log.go:181] (0xc000d9c0a0) (5) Data frame handling\nI0825 01:16:16.996504 3908 log.go:181] (0xc000d9c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32641/\nI0825 01:16:16.996530 3908 log.go:181] (0xc000e04000) (3) Data frame sent\nI0825 01:16:17.002947 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:17.002967 3908 log.go:181] (0xc000e04000) (3) Data frame handling\nI0825 01:16:17.002983 3908 log.go:181] (0xc000e04000) (3) Data frame sent\nI0825 01:16:17.003479 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:17.003502 3908 log.go:181] (0xc000e04000) (3) Data frame handling\nI0825 01:16:17.003531 3908 log.go:181] (0xc000e04000) (3) Data frame sent\nI0825 01:16:17.003548 3908 log.go:181] (0xc000bdafd0) Data frame received for 5\nI0825 01:16:17.003562 3908 log.go:181] (0xc000d9c0a0) (5) Data frame handling\nI0825 01:16:17.003581 3908 log.go:181] (0xc000d9c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32641/\nI0825 01:16:17.008469 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:17.008482 3908 log.go:181] (0xc000e04000) (3) Data frame handling\nI0825 01:16:17.008489 3908 log.go:181] (0xc000e04000) (3) Data frame sent\nI0825 01:16:17.009121 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:17.009133 3908 log.go:181] (0xc000e04000) (3) Data frame handling\nI0825 01:16:17.009143 3908 log.go:181] (0xc000e04000) (3) Data frame sent\nI0825 01:16:17.009166 3908 log.go:181] (0xc000bdafd0) Data frame received for 5\nI0825 01:16:17.009190 3908 log.go:181] (0xc000d9c0a0) (5) Data frame handling\nI0825 01:16:17.009223 3908 log.go:181] (0xc000d9c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32641/\nI0825 01:16:17.013823 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:17.013846 3908 log.go:181] (0xc000e04000) (3) Data frame handling\nI0825 01:16:17.013860 3908 log.go:181] (0xc000e04000) (3) Data frame sent\nI0825 01:16:17.014731 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:17.014776 3908 log.go:181] (0xc000e04000) (3) Data frame handling\nI0825 01:16:17.014809 3908 log.go:181] 
(0xc000e04000) (3) Data frame sent\nI0825 01:16:17.014850 3908 log.go:181] (0xc000bdafd0) Data frame received for 5\nI0825 01:16:17.014862 3908 log.go:181] (0xc000d9c0a0) (5) Data frame handling\nI0825 01:16:17.014880 3908 log.go:181] (0xc000d9c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32641/\nI0825 01:16:17.018009 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:17.018028 3908 log.go:181] (0xc000e04000) (3) Data frame handling\nI0825 01:16:17.018047 3908 log.go:181] (0xc000e04000) (3) Data frame sent\nI0825 01:16:17.018411 3908 log.go:181] (0xc000bdafd0) Data frame received for 5\nI0825 01:16:17.018428 3908 log.go:181] (0xc000d9c0a0) (5) Data frame handling\nI0825 01:16:17.018435 3908 log.go:181] (0xc000d9c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32641/\nI0825 01:16:17.018444 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:17.018449 3908 log.go:181] (0xc000e04000) (3) Data frame handling\nI0825 01:16:17.018457 3908 log.go:181] (0xc000e04000) (3) Data frame sent\nI0825 01:16:17.022463 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:17.022481 3908 log.go:181] (0xc000e04000) (3) Data frame handling\nI0825 01:16:17.022495 3908 log.go:181] (0xc000e04000) (3) Data frame sent\nI0825 01:16:17.023175 3908 log.go:181] (0xc000bdafd0) Data frame received for 5\nI0825 01:16:17.023201 3908 log.go:181] (0xc000d9c0a0) (5) Data frame handling\nI0825 01:16:17.023213 3908 log.go:181] (0xc000d9c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32641/\nI0825 01:16:17.023231 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:17.023242 3908 log.go:181] (0xc000e04000) (3) Data frame handling\nI0825 01:16:17.023251 3908 log.go:181] (0xc000e04000) (3) Data frame sent\nI0825 01:16:17.027611 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:17.027625 3908 log.go:181] (0xc000e04000) (3) Data frame handling\nI0825 01:16:17.027635 3908 log.go:181] (0xc000e04000) (3) Data frame sent\nI0825 01:16:17.028085 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:17.028099 3908 log.go:181] (0xc000e04000) (3) Data frame handling\nI0825 01:16:17.028106 3908 log.go:181] (0xc000e04000) (3) Data frame sent\nI0825 01:16:17.028114 3908 log.go:181] (0xc000bdafd0) Data frame received for 5\nI0825 01:16:17.028118 3908 log.go:181] (0xc000d9c0a0) (5) Data frame handling\nI0825 01:16:17.028123 3908 log.go:181] (0xc000d9c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32641/\nI0825 01:16:17.032951 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:17.032993 3908 log.go:181] (0xc000e04000) (3) Data frame handling\nI0825 01:16:17.033028 3908 log.go:181] (0xc000e04000) (3) Data frame sent\nI0825 01:16:17.033413 3908 log.go:181] (0xc000bdafd0) Data frame received for 5\nI0825 01:16:17.033441 3908 log.go:181] (0xc000d9c0a0) (5) Data frame handling\nI0825 01:16:17.033453 3908 log.go:181] (0xc000d9c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32641/\nI0825 01:16:17.033465 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:17.033472 3908 log.go:181] (0xc000e04000) (3) Data frame handling\nI0825 01:16:17.033480 3908 log.go:181] (0xc000e04000) (3) Data frame sent\nI0825 01:16:17.038501 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:17.038521 3908 log.go:181] 
(0xc000e04000) (3) Data frame handling\nI0825 01:16:17.038538 3908 log.go:181] (0xc000e04000) (3) Data frame sent\nI0825 01:16:17.038976 3908 log.go:181] (0xc000bdafd0) Data frame received for 5\nI0825 01:16:17.038998 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:17.039013 3908 log.go:181] (0xc000e04000) (3) Data frame handling\nI0825 01:16:17.039022 3908 log.go:181] (0xc000e04000) (3) Data frame sent\nI0825 01:16:17.039034 3908 log.go:181] (0xc000d9c0a0) (5) Data frame handling\nI0825 01:16:17.039041 3908 log.go:181] (0xc000d9c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32641/\nI0825 01:16:17.043445 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:17.043470 3908 log.go:181] (0xc000e04000) (3) Data frame handling\nI0825 01:16:17.043504 3908 log.go:181] (0xc000e04000) (3) Data frame sent\nI0825 01:16:17.043904 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:17.043938 3908 log.go:181] (0xc000bdafd0) Data frame received for 5\nI0825 01:16:17.043973 3908 log.go:181] (0xc000d9c0a0) (5) Data frame handling\nI0825 01:16:17.043996 3908 log.go:181] (0xc000d9c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32641/\nI0825 01:16:17.044018 3908 log.go:181] (0xc000e04000) (3) Data frame handling\nI0825 01:16:17.044040 3908 log.go:181] (0xc000e04000) (3) Data frame sent\nI0825 01:16:17.049283 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:17.049302 3908 log.go:181] (0xc000e04000) (3) Data frame handling\nI0825 01:16:17.049317 3908 log.go:181] (0xc000e04000) (3) Data frame sent\nI0825 01:16:17.049974 3908 log.go:181] (0xc000bdafd0) Data frame received for 5\nI0825 01:16:17.049995 3908 log.go:181] (0xc000d9c0a0) (5) Data frame handling\nI0825 01:16:17.050003 3908 log.go:181] (0xc000d9c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32641/\nI0825 01:16:17.050014 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:17.050020 3908 log.go:181] (0xc000e04000) (3) Data frame handling\nI0825 01:16:17.050027 3908 log.go:181] (0xc000e04000) (3) Data frame sent\nI0825 01:16:17.055392 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:17.055416 3908 log.go:181] (0xc000e04000) (3) Data frame handling\nI0825 01:16:17.055431 3908 log.go:181] (0xc000e04000) (3) Data frame sent\nI0825 01:16:17.055846 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:17.055869 3908 log.go:181] (0xc000bdafd0) Data frame received for 5\nI0825 01:16:17.055890 3908 log.go:181] (0xc000d9c0a0) (5) Data frame handling\nI0825 01:16:17.055905 3908 log.go:181] (0xc000d9c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32641/\nI0825 01:16:17.055931 3908 log.go:181] (0xc000e04000) (3) Data frame handling\nI0825 01:16:17.055973 3908 log.go:181] (0xc000e04000) (3) Data frame sent\nI0825 01:16:17.060973 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:17.060990 3908 log.go:181] (0xc000e04000) (3) Data frame handling\nI0825 01:16:17.061005 3908 log.go:181] (0xc000e04000) (3) Data frame sent\nI0825 01:16:17.061749 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:17.061780 3908 log.go:181] (0xc000e04000) (3) Data frame handling\nI0825 01:16:17.061799 3908 log.go:181] (0xc000e04000) (3) Data frame sent\nI0825 01:16:17.061825 3908 log.go:181] (0xc000bdafd0) Data frame received for 5\nI0825 01:16:17.061841 3908 
log.go:181] (0xc000d9c0a0) (5) Data frame handling\nI0825 01:16:17.061859 3908 log.go:181] (0xc000d9c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32641/\nI0825 01:16:17.067719 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:17.067760 3908 log.go:181] (0xc000e04000) (3) Data frame handling\nI0825 01:16:17.067895 3908 log.go:181] (0xc000e04000) (3) Data frame sent\nI0825 01:16:17.068282 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:17.068300 3908 log.go:181] (0xc000e04000) (3) Data frame handling\nI0825 01:16:17.068310 3908 log.go:181] (0xc000e04000) (3) Data frame sent\nI0825 01:16:17.068332 3908 log.go:181] (0xc000bdafd0) Data frame received for 5\nI0825 01:16:17.068344 3908 log.go:181] (0xc000d9c0a0) (5) Data frame handling\nI0825 01:16:17.068359 3908 log.go:181] (0xc000d9c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32641/\nI0825 01:16:17.073784 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:17.073814 3908 log.go:181] (0xc000e04000) (3) Data frame handling\nI0825 01:16:17.073835 3908 log.go:181] (0xc000e04000) (3) Data frame sent\nI0825 01:16:17.074283 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:17.074312 3908 log.go:181] (0xc000e04000) (3) Data frame handling\nI0825 01:16:17.074347 3908 log.go:181] (0xc000e04000) (3) Data frame sent\nI0825 01:16:17.074375 3908 log.go:181] (0xc000bdafd0) Data frame received for 5\nI0825 01:16:17.074393 3908 log.go:181] (0xc000d9c0a0) (5) Data frame handling\nI0825 01:16:17.074423 3908 log.go:181] (0xc000d9c0a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.11:32641/\nI0825 01:16:17.081319 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:17.081357 3908 log.go:181] (0xc000e04000) (3) Data frame handling\nI0825 01:16:17.081398 3908 log.go:181] (0xc000e04000) (3) Data frame sent\nI0825 01:16:17.082374 3908 log.go:181] (0xc000bdafd0) Data frame received for 3\nI0825 01:16:17.082390 3908 log.go:181] (0xc000e04000) (3) Data frame handling\nI0825 01:16:17.082484 3908 log.go:181] (0xc000bdafd0) Data frame received for 5\nI0825 01:16:17.082509 3908 log.go:181] (0xc000d9c0a0) (5) Data frame handling\nI0825 01:16:17.084201 3908 log.go:181] (0xc000bdafd0) Data frame received for 1\nI0825 01:16:17.084239 3908 log.go:181] (0xc000bd2820) (1) Data frame handling\nI0825 01:16:17.084252 3908 log.go:181] (0xc000bd2820) (1) Data frame sent\nI0825 01:16:17.084294 3908 log.go:181] (0xc000bdafd0) (0xc000bd2820) Stream removed, broadcasting: 1\nI0825 01:16:17.084314 3908 log.go:181] (0xc000bdafd0) Go away received\nI0825 01:16:17.084947 3908 log.go:181] (0xc000bdafd0) (0xc000bd2820) Stream removed, broadcasting: 1\nI0825 01:16:17.084970 3908 log.go:181] (0xc000bdafd0) (0xc000e04000) Stream removed, broadcasting: 3\nI0825 01:16:17.084982 3908 log.go:181] (0xc000bdafd0) (0xc000d9c0a0) Stream removed, broadcasting: 5\n" Aug 25 01:16:17.096: INFO: stdout: "\naffinity-nodeport-96zw5\naffinity-nodeport-96zw5\naffinity-nodeport-96zw5\naffinity-nodeport-96zw5\naffinity-nodeport-96zw5\naffinity-nodeport-96zw5\naffinity-nodeport-96zw5\naffinity-nodeport-96zw5\naffinity-nodeport-96zw5\naffinity-nodeport-96zw5\naffinity-nodeport-96zw5\naffinity-nodeport-96zw5\naffinity-nodeport-96zw5\naffinity-nodeport-96zw5\naffinity-nodeport-96zw5\naffinity-nodeport-96zw5" Aug 25 01:16:17.096: INFO: Received response from host: affinity-nodeport-96zw5 Aug 25 01:16:17.096: 
INFO: Received response from host: affinity-nodeport-96zw5
Aug 25 01:16:17.096: INFO: Received response from host: affinity-nodeport-96zw5
Aug 25 01:16:17.096: INFO: Received response from host: affinity-nodeport-96zw5
Aug 25 01:16:17.096: INFO: Received response from host: affinity-nodeport-96zw5
Aug 25 01:16:17.096: INFO: Received response from host: affinity-nodeport-96zw5
Aug 25 01:16:17.096: INFO: Received response from host: affinity-nodeport-96zw5
Aug 25 01:16:17.096: INFO: Received response from host: affinity-nodeport-96zw5
Aug 25 01:16:17.096: INFO: Received response from host: affinity-nodeport-96zw5
Aug 25 01:16:17.096: INFO: Received response from host: affinity-nodeport-96zw5
Aug 25 01:16:17.096: INFO: Received response from host: affinity-nodeport-96zw5
Aug 25 01:16:17.096: INFO: Received response from host: affinity-nodeport-96zw5
Aug 25 01:16:17.096: INFO: Received response from host: affinity-nodeport-96zw5
Aug 25 01:16:17.096: INFO: Received response from host: affinity-nodeport-96zw5
Aug 25 01:16:17.096: INFO: Received response from host: affinity-nodeport-96zw5
Aug 25 01:16:17.096: INFO: Received response from host: affinity-nodeport-96zw5
Aug 25 01:16:17.096: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport in namespace services-2793, will wait for the garbage collector to delete the pods
Aug 25 01:16:17.218: INFO: Deleting ReplicationController affinity-nodeport took: 16.164227ms
Aug 25 01:16:17.818: INFO: Terminating ReplicationController affinity-nodeport pods took: 600.212037ms
[AfterEach] [sig-network] Services
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 01:16:30.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2793" for this suite.
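(For reference: the curl loop above is the entire affinity check. A by-hand version, a minimal sketch reusing the node IP 172.18.0.11, NodePort 32641, and namespace services-2793 observed in this run; affinity holds if every response names the same backend pod, here affinity-nodeport-96zw5.)

$ kubectl get svc affinity-nodeport -n services-2793 -o jsonpath='{.spec.sessionAffinity}'   # expect: ClientIP
$ for i in $(seq 0 15); do curl -q -s --connect-timeout 2 http://172.18.0.11:32641/; echo; done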
[AfterEach] [sig-network] Services
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:34.151 seconds]
[sig-network] Services
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":295,"skipped":4853,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-network] Services should find a service from listing all namespaces [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 01:16:30.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should find a service from listing all namespaces [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: fetching services
[AfterEach] [sig-network] Services
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 01:16:30.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9113" for this suite.
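(The "fetching services" step above is essentially a cross-namespace list. A by-hand equivalent with standard kubectl flags:)

$ kubectl get services --all-namespaces
$ kubectl get services --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}{"\n"}{end}'   # namespace/name pairs only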
[AfterEach] [sig-network] Services
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":303,"completed":296,"skipped":4866,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 01:16:30.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Aug 25 01:16:30.368: INFO: >>> kubeConfig: /root/.kube/config
Aug 25 01:16:33.860: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 01:16:46.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7739" for this suite.
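(Sketch of the property verified above: two kinds registered under a single CRD group/version are each published to OpenAPI independently. The plural names "foos"/"bars" and the group "example.com" are hypothetical stand-ins for the suite's generated CRDs, not names from this run:)

$ kubectl explain foos   # schema for kind Foo
$ kubectl explain bars   # schema for kind Bar, same group/version
$ kubectl get --raw /openapi/v2 | grep -o 'example\.com' | wc -l   # rough check that both definitions were published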
• [SLOW TEST:15.991 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":303,"completed":297,"skipped":4890,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]}
SSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 01:16:46.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod test-webserver-caf9d7be-ca54-45d9-beef-63b635015326 in namespace container-probe-8181
Aug 25 01:16:52.378: INFO: Started pod test-webserver-caf9d7be-ca54-45d9-beef-63b635015326 in namespace container-probe-8181
STEP: checking the pod's current state and verifying that restartCount is present
Aug 25 01:16:52.380: INFO: Initial restart count of pod test-webserver-caf9d7be-ca54-45d9-beef-63b635015326 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 01:20:53.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8181" for this suite.
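(The pass condition above reduces to "restartCount is still 0 after roughly four minutes of /healthz probing". The same status field the suite polls can be read directly; the pod and namespace names are the ones from this run:)

$ kubectl get pod test-webserver-caf9d7be-ca54-45d9-beef-63b635015326 -n container-probe-8181 -o jsonpath='{.status.containerStatuses[0].restartCount}'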
• [SLOW TEST:247.243 seconds]
[k8s.io] Probing container
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":298,"skipped":4896,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 01:20:53.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Aug 25 01:20:53.783: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 25 01:20:53.810: INFO: Waiting for terminating namespaces to be deleted...
Aug 25 01:20:53.884: INFO: Logging pods the apiserver thinks are on node latest-worker before test
Aug 25 01:20:53.888: INFO: daemon-set-64t9w from daemonsets-1323 started at 2020-08-21 01:17:50 +0000 UTC (1 container status recorded)
Aug 25 01:20:53.888: INFO: Container app ready: true, restart count 0
Aug 25 01:20:53.888: INFO: kindnet-gmpqb from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container status recorded)
Aug 25 01:20:53.888: INFO: Container kindnet-cni ready: true, restart count 1
Aug 25 01:20:53.888: INFO: kube-proxy-82wrf from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container status recorded)
Aug 25 01:20:53.888: INFO: Container kube-proxy ready: true, restart count 0
Aug 25 01:20:53.888: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test
Aug 25 01:20:53.892: INFO: daemon-set-jxhg7 from daemonsets-1323 started at 2020-08-21 01:17:50 +0000 UTC (1 container status recorded)
Aug 25 01:20:53.892: INFO: Container app ready: true, restart count 0
Aug 25 01:20:53.892: INFO: kindnet-grzzh from kube-system started at 2020-08-15 09:42:30 +0000 UTC (1 container status recorded)
Aug 25 01:20:53.892: INFO: Container kindnet-cni ready: true, restart count 1
Aug 25 01:20:53.892: INFO: kube-proxy-fjk8r from kube-system started at 2020-08-15 09:42:29 +0000 UTC (1 container status recorded)
Aug 25 01:20:53.892: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-f5dfc57c-b2e8-44d2-a376-ea351cb1903b 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-f5dfc57c-b2e8-44d2-a376-ea351cb1903b off the node latest-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-f5dfc57c-b2e8-44d2-a376-ea351cb1903b
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 01:21:16.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-616" for this suite.
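(Why the three pods above coexist on one node: the host-port conflict key is the triple (hostIP, hostPort, protocol), and the pods differ pairwise in at least one component: pod1 uses 127.0.0.1/54321/TCP, pod2 uses 127.0.0.2/54321/TCP, pod3 uses 127.0.0.2/54321/UDP. That all three were scheduled can be confirmed against the run's namespace:)

$ kubectl get pods -n sched-pred-616 -o wide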
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:23.036 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":303,"completed":299,"skipped":4914,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 01:21:16.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Aug 25 01:21:17.094: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cf7e68fb-4ed4-4996-a1ad-e0afcba94a7f" in namespace "projected-5122" to be "Succeeded or Failed"
Aug 25 01:21:17.272: INFO: Pod "downwardapi-volume-cf7e68fb-4ed4-4996-a1ad-e0afcba94a7f": Phase="Pending", Reason="", readiness=false. Elapsed: 178.237471ms
Aug 25 01:21:19.489: INFO: Pod "downwardapi-volume-cf7e68fb-4ed4-4996-a1ad-e0afcba94a7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.395273158s
Aug 25 01:21:21.493: INFO: Pod "downwardapi-volume-cf7e68fb-4ed4-4996-a1ad-e0afcba94a7f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.398528278s
Aug 25 01:21:23.579: INFO: Pod "downwardapi-volume-cf7e68fb-4ed4-4996-a1ad-e0afcba94a7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.484873363s
STEP: Saw pod success
Aug 25 01:21:23.579: INFO: Pod "downwardapi-volume-cf7e68fb-4ed4-4996-a1ad-e0afcba94a7f" satisfied condition "Succeeded or Failed"
Aug 25 01:21:23.646: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-cf7e68fb-4ed4-4996-a1ad-e0afcba94a7f container client-container:
STEP: delete the pod
Aug 25 01:21:24.139: INFO: Waiting for pod downwardapi-volume-cf7e68fb-4ed4-4996-a1ad-e0afcba94a7f to disappear
Aug 25 01:21:24.363: INFO: Pod downwardapi-volume-cf7e68fb-4ed4-4996-a1ad-e0afcba94a7f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 01:21:24.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5122" for this suite.
• [SLOW TEST:7.793 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":300,"skipped":4927,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 01:21:24.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Aug 25 01:21:24.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 25 01:21:27.946: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5547 create -f -'
Aug 25 01:21:39.703: INFO: stderr: ""
Aug 25 01:21:39.704: INFO: stdout: "e2e-test-crd-publish-openapi-4062-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Aug 25 01:21:39.704: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5547 delete e2e-test-crd-publish-openapi-4062-crds test-cr'
Aug 25 01:21:40.042: INFO: stderr: ""
Aug 25 01:21:40.042: INFO: stdout: "e2e-test-crd-publish-openapi-4062-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Aug 25 01:21:40.042: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5547 apply -f -'
Aug 25 01:21:40.948: INFO: stderr: ""
Aug 25 01:21:40.948: INFO: stdout: "e2e-test-crd-publish-openapi-4062-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Aug 25 01:21:40.948: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5547 delete e2e-test-crd-publish-openapi-4062-crds test-cr'
Aug 25 01:21:41.279: INFO: stderr: ""
Aug 25 01:21:41.279: INFO: stdout: "e2e-test-crd-publish-openapi-4062-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Aug 25 01:21:41.279: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45453 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4062-crds'
Aug 25 01:21:41.565: INFO: stderr: ""
Aug 25 01:21:41.565: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4062-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 01:21:44.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5547" for this suite.
• [SLOW TEST:20.155 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":303,"completed":301,"skipped":4931,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 25 01:21:44.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Aug 25 01:21:44.811: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6602 /api/v1/namespaces/watch-6602/configmaps/e2e-watch-test-resource-version 911daf7e-f5c9-45c3-ac85-4761373dc2a1 3445842 0 2020-08-25 01:21:44 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-08-25 01:21:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 25 01:21:44.811: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6602 /api/v1/namespaces/watch-6602/configmaps/e2e-watch-test-resource-version 911daf7e-f5c9-45c3-ac85-4761373dc2a1 3445843 0 2020-08-25 01:21:44 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-08-25 01:21:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 25 01:21:44.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6602" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":303,"completed":302,"skipped":4934,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]}
Aug 25 01:21:44.825: INFO: Running AfterSuite actions on all nodes
Aug 25 01:21:44.825: INFO: Running AfterSuite actions on node 1
Aug 25 01:21:44.825: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":303,"completed":302,"skipped":4934,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]}

Summarizing 1 Failure:

[Fail] [sig-cli] Kubectl client Kubectl logs [It] should be able to retrieve and filter logs [Conformance]
/workspace/anago-v1.19.0-rc.3.71+423b15a76e39c2/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1468

Ran 303 of 5237 Specs in 7288.912 seconds
FAIL! -- 302 Passed | 1 Failed | 0 Pending | 4934 Skipped
--- FAIL: TestE2E (7288.99s)
FAIL
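(To re-run just the one failing spec rather than the full conformance set, the usual approach with the Kubernetes e2e.test runner is a Ginkgo focus regex; the binary path below is illustrative, and the regex simply matches the failure reported above:)

$ ./e2e.test --ginkgo.focus='Kubectl logs should be able to retrieve and filter logs' --kubeconfig=/root/.kube/config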